Engineering: Theses and Dissertations
Recent Submissions
Item Open Access: Medium access control for collision-free flight in UAV formations (2025). Samandari, Amelia.

The deployment of Unmanned Aerial Vehicles (UAVs) in autonomous formations requires accurate and timely communication of safety information, and hence a communication protocol that supports its reliable transfer between UAVs. This thesis focuses on the unacknowledged local broadcast of safety data that is periodically sent from each UAV to the other UAVs in close physical proximity, addressed using a Time Division Multiple Access (TDMA)-type MAC protocol. The thesis proposes three TDMA-based MAC protocols for UAVs in rigid formations, each addressing a different network topology. In rigid formations, the relative positions of the UAVs do not change throughout deployment. The proposed protocol designs focus on overcoming two limitations of traditional TDMA: scalability, since each device must be allocated a unique time slot, and a single point of failure, namely reliance on a centralized controller. The first protocol, a centralized spatial reuse scheme, serves as a benchmark. With spatial reuse, multiple UAVs can be allocated to the same time slot. This protocol demonstrates the improvements achieved through spatial reuse compared to a traditional TDMA scheme in a specific formation deployment. It addresses the issue of scalability but relies on a centralized controller. The second protocol, Distributed Assignment and Resolution of Time slots (D-ART), supports allocation without a centralized controller in single-hop scenarios. It is designed for a fully connected UAV formation and provides distributed superframe adaptation and self-allocation for single-hop communication. It addresses the single-point-of-failure issue, but each UAV must still be allocated a unique time slot. The final protocol, Distributed Self-allocated Time slot Reuse (DSTR), extends these capabilities to multi-hop scenarios. It addresses both the scalability and single-point-of-failure limitations, and can be used in both single-hop and multi-hop scenarios. These proposed protocols address the essential task of communicating safety information in rigid UAV formations with different network topologies, enabling collision-free deployment of the formation. This is an important step towards improving the safety and practicality of UAV formations in application scenarios that span a range of industries.
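As a loose illustration of the spatial-reuse idea above (not the allocation algorithm proposed in the thesis), slot reuse can be viewed as colouring an interference graph: UAVs within interference range must hold distinct slots, while distant UAVs may share one. A minimal Python sketch, with the positions and interference range as assumed inputs:

    import itertools

    def assign_slots(positions, interference_range):
        """Greedy graph colouring: UAVs closer than the interference
        range get distinct TDMA slots; distant UAVs may reuse slots."""
        n = len(positions)
        neighbours = {i: set() for i in range(n)}
        for i, j in itertools.combinations(range(n), 2):
            dist = sum((a - b) ** 2 for a, b in zip(positions[i], positions[j])) ** 0.5
            if dist < interference_range:
                neighbours[i].add(j)
                neighbours[j].add(i)
        slots = {}
        for i in range(n):
            taken = {slots[j] for j in neighbours[i] if j in slots}
            slots[i] = next(s for s in itertools.count() if s not in taken)
        return slots

    # A string-of-pearls formation: UAVs 100 m apart can share slots.
    print(assign_slots([(0, 0), (50, 0), (100, 0), (150, 0)], 60))
    # -> {0: 0, 1: 1, 2: 0, 3: 1}: four UAVs need only two slots

Under traditional TDMA the same formation would need four slots, which is the scalability limitation the benchmark protocol targets.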
Item Open Access: Evaluating the techno-economic potential of combined geothermal, bioenergy and CO₂ removal cycles for power production and net-negative emissions for Aotearoa and beyond (2025). Titus, K. A.

If the strict global warming limits of 1.5°C and 2°C are surpassed, no amount of subsequent renewable energy deployment can reverse the carbon budget overshoot. The Intergovernmental Panel on Climate Change (IPCC) therefore incorporates methods and technologies that directly or indirectly remove CO₂ already residing in the atmosphere in its mitigation pathways. One nascent technology, bioenergy with carbon capture and storage (BECCS), is of particular interest. BECCS can supply the energy system with renewable electricity while storing biogenic CO₂ underground. Despite this promising potential, there is a large gap between current BECCS deployment (~0.5 MtCO₂/year) and the scale featured in some mitigation scenarios (up to 10 GtCO₂/year).

One of the key challenges for BECCS projects is that the bioenergy process and the CO₂ injection process are in two distinct locations. Instead, I evaluate the novel process of applying BECCS to geothermal power plants to facilitate single-site power production and CO₂ removal via aqueous dissolution. Provided reservoir pressure is maintained, aqueous dissolution of CO₂ can mitigate the buoyancy concerns of injecting free-phase supercritical CO₂ into geological formations. In the past, geogenic and atmospheric CO₂ have been dissolved in reinjected geothermal fluid. The benefit of applying BECCS over these approaches is that the bioenergy can be used to provide surface enthalpy control of the production fluid. The increased electricity from doing so, coupled with revenues from CO₂ removal, can improve the economic productivity of low-temperature, marginal geothermal systems. To test the merits of geothermal-BECCS, I first construct a systems model to relate the thermodynamic processes of hybrid geothermal-bioenergy with the CO₂ dissolution capacity of reinjected brine. On a per unit mass flow rate basis (1 kg/s), all geothermal-BECCS designs improved power production (62-589 kWe) over conventional geothermal (32-43 kWe). Additionally, these designs could achieve negative emissions intensities between -131 and -922 gCO₂/kWh. Then, I perform techno-economic analysis to compare the economic viability of geothermal-BECCS with conventional geothermal and geothermal-based direct air carbon capture and storage (DACCS). I perform uncertainty quantification through Monte Carlo sampling of key design parameters (e.g. capital expenditure) coupled with sensitivity analysis of the market prices of CO₂, biofuel and electricity. With new geothermal-BECCS builds, it is possible to achieve a scale of 1 MtCO₂/year in New Zealand at reasonable CO₂ prices ($100/tCO₂) using half of the country's annual forestry residues (~0.79 Mt/year). Geothermal-BECCS outcompetes geothermal-DACCS as both an electricity generation and carbon dioxide removal (CDR) technology for high-temperature fields. However, both are less effective at outlier high-gas systems (e.g. those found in Türkiye). Next, I apply this techno-economic systems model to a geothermal-BECCS retrofit case study at an existing geothermal plant, Ngāwhā power station. Here, a 1 MWe net power increase results in 15.9 ktCO₂/year of negative emissions, using only 6% of the Far North's annual forestry residues. Further CDR volumes are constrained by existing geogenic CO₂ injection. Instead, the potential to utilize excess biogenic CO₂ as a carbon-neutral feedstock presents another valuable opportunity. Then, I quantify the degree to which marginal hydrothermal systems are made viable through geothermal-BECCS. I express this through a probability distribution function of the relative abundance of resources and calculate the cumulative distribution of potentially economic geothermal-BECCS systems. My results show that geothermal-BECCS can draw from lower temperature systems or fewer wells and still be cost-competitive against conventional geothermal. Further, out of 1562 prospective resources, between 345 and 1007 geothermal-BECCS designs could be considered viable. This exceeds the number of viable conventional geothermal plants (178-292) by a considerable margin. Thus, if the market price of CO₂ is high enough, it is possible that geothermal-BECCS redefines what is considered a marginal geothermal resource.

Finally, I estimate that the dissolution capacity of New Zealand's untapped geothermal resources allows for up to 4 MtCO₂/year. This has the potential to offset more CO₂ than the country's current rate of carbon budget overshoot. Revenues from CDR at geothermal systems can exceed those of other geothermal-X configurations such as the production of green hydrogen. The culmination of these findings suggests that geothermal-BECCS is suitable for a pilot demonstration in New Zealand or a similar mature geothermal economy with abundant waste biomass.
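The Monte Carlo uncertainty quantification described above can be illustrated with a minimal sketch. The toy cash-flow model, parameter ranges, and distributions below are illustrative assumptions, not the thesis's actual techno-economic model:

    import random

    def sample_npv(rng):
        """One draw of a toy geothermal-BECCS cash flow. All numbers
        are illustrative placeholders, not values from the thesis."""
        capex = rng.uniform(40e6, 80e6)            # up-front cost, $
        co2_price = rng.triangular(50, 200, 100)   # $/tCO2 (low, high, mode)
        elec_price = rng.uniform(60, 120)          # $/MWh
        net_mwe, cdr = 5.0, 50_000                 # net power, tCO2/yr removed
        annual = net_mwe * 8000 * elec_price + cdr * co2_price - 2e6
        return -capex + sum(annual / 1.08 ** t for t in range(1, 31))

    rng = random.Random(42)
    npvs = [sample_npv(rng) for _ in range(10_000)]
    print(f"P(NPV > 0) = {sum(v > 0 for v in npvs) / len(npvs):.2f}")

Sweeping one input (e.g. the CO₂ price) while sampling the others gives the kind of sensitivity analysis the abstract describes.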
Item Open Access: Design of a mobile learning management application for parents of young children with learning difficulties in Sri Lanka (2025). Hitihami Appuhamilage, Amaya Nirmani.

This study addresses the needs of parents and caregivers seeking support for their children with learning difficulties in Sri Lanka. Research indicates that over 70% of Sri Lankan parents lack a clear understanding of how to support children facing these challenges, which adversely affects both the parents and their children. Children with learning disabilities who do not receive adequate support are more susceptible to mental health issues such as anxiety and depression. It is essential for parents to gain behavioural knowledge that emphasises positive reinforcement methods to enhance the behaviour of children with learning difficulties. Home-based support strategies, including goal setting, encouragement, and positive reinforcement, have proven to be highly effective. Currently, there is limited research on the experiences of parents managing children with specific learning disorders. Therefore, the aim of this study is to design a user-centred Mobile Learning Management System (LMS) application that offers knowledge, techniques, resources, and support for parents of children with learning difficulties. This research focuses on two primary questions: (1) What information and services should the LMS provide to parents and caregivers of young children experiencing learning difficulties? (2) How can a Mobile Learning Management System assist these parents and caregivers in supporting their children? To address these questions, the study presents research findings indicating that parents need a better understanding of learning difficulties and proper guidelines to navigate their child's challenges. Many parents exhibit negative attitudes and reactions towards their child's diagnosis, including rejection, denial, overprotection, and hopelessness. Caring for a child with learning difficulties can impose major physical, personal, social, financial, and emotional burdens on parents. This research highlights parents' experiences and the barriers they face in dealing with children who have learning difficulties, emphasising their lack of knowledge and the struggles they encounter. Through interviews with participants selected on the basis of their backgrounds, the study explores parents' and caregivers' perspectives, inner thoughts, and the assistance they require from this product. Drawing upon both the research findings and the interview responses, the study aims to create a user-friendly mobile learning management system that meets the daily needs of parents and caregivers. By gaining knowledge, parents can better manage their concerns. Additionally, the study considers how to incorporate feedback from user interface testing to refine the final design of the application and evaluate the effectiveness of user experience-focused strategies.

This LMS mobile application is crafted to offer mental and emotional support, catering particularly to working parents and those facing challenges with daily tasks related to their young children's learning difficulties. The goal of this support is to make a notable positive impact on the physical and emotional well-being of parents and caregivers. Consequently, children with special needs receive the attention and care they require to address their learning challenges effectively.
Item Open Access: 2D site response modeling with consideration of soil heterogeneity, non-vertical incidence and spatially-varying input motions (2025). Eskandarighadi, Mohammad.

Seismic site response plays a pivotal role in earthquake engineering, shaping the ground motion observed at the surface and influencing both seismic hazard assessments and structural design. While traditional one-dimensional site response analyses capture key effects such as resonance, impedance contrasts, damping, and nonlinear soil behavior, they cannot capture important multi-dimensional effects such as wave scattering, soil heterogeneity, non-vertical wave incidence, and spatially variable ground motions. This thesis addresses these shortcomings by developing and evaluating a comprehensive two-dimensional site response modeling framework for Treasure Island, a site located in the seismically active San Francisco Bay area. Leveraging advanced numerical simulation tools and high-performance computing resources, this work uses 2D finite element models to investigate the relative significance of three key factors that are often overlooked in simpler site response analysis approaches: spatially correlated random fields, non-vertical wave incidence, and spatially variable ground motions. It also examines their individual and combined impacts on seismic site response. Comparisons with empirical transfer functions (ETFs) from the Treasure Island Downhole Array (TIDA) confirm that a purely 1D analysis can underpredict higher-frequency site amplification and overlook certain resonance peaks. Introducing random fields effectively captures variability in site response by representing heterogeneity in shear wave velocity. For sites with relatively uniform deposits, such as Treasure Island, random field models with excessive variability in shear wave velocity increase variability across realizations but also cause the average response to deviate significantly from the empirical transfer functions. Accounting for non-vertical incidence provides additional benefits by better aligning the low-frequency transfer function with ETF trends across a larger frequency range, highlighting the role of wave orientation relative to site geometry. Spatially variable ground motions further enhance realism by incorporating variations in input motions across the domain, although at greater computational cost. Taken together, these modeling approaches, individually and in combination, offer insights into the interplay of geological variability and seismic wave characteristics, capturing elements of the observed site response across the frequency spectrum. Key findings emphasize the need to balance model complexity with the actual geological and seismological conditions at a given site. In some instances, non-vertical incidence alone provides a cost-effective means of capturing the low-frequency variability observed in empirical transfer functions by accounting for diverse propagation directions of ground motions.

Averaging results across a range of incidence angles helps represent inherent variability, providing a more realistic depiction of site response. The alignment of incoming waves with the rock-soil interface was found to significantly influence the variability and amplification trends in the simulated results. Spatially variable ground motions introduce further variability in transfer functions, particularly at lower frequencies and around the fundamental mode. However, the added complexity and computational cost of this approach may not be necessary for simpler applications. For critical projects, investigating multiple coherency functions, including site-specific ones, could further refine variability estimates and enhance modeling accuracy. When spatially correlated random fields are integrated with non-vertical incidence or spatially variable input motions, the interplay of these factors enhances variability and more effectively captures resonance effects, particularly the second peak associated with the fundamental mode. However, at higher frequencies, the added complexity does not consistently align with empirical observations. The effectiveness of these combined approaches depends on factors such as random field parameters, incidence angles, and wave orientation relative to bedrock topography. While combined models offer valuable insights, their computational cost may not always be justified when simpler models achieve comparable accuracy. The results further suggest that detailed site characterization, particularly of subsurface heterogeneity, combined with consideration of wave orientation in the analysis, can help tailor modeling parameters and avoid over- or underestimating transfer functions. In conclusion, this thesis demonstrates the substantial benefits of moving beyond 1D models toward 2D seismic site response analyses that explicitly account for soil heterogeneity, non-vertical waves, and spatially varying input motions. The modeling framework developed herein provides essential insights into how each of these factors, as well as their interactions, contributes to the observed variability in empirical results. Although these methods involve greater computational effort, the more nuanced understanding of site-specific wave behavior makes them invaluable for high-stakes applications, particularly in regions with complex site conditions or significant seismic hazards, such as those affecting critical infrastructure or requiring advanced seismic hazard evaluations. Recommendations for future work include refining random field parameters based on site-specific geophysical data, exploring multiple coherency functions for spatially variable ground motions, and potentially extending this framework to three-dimensional analyses for a fully realistic depiction of seismic wave propagation in complex geological environments.
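For context, an empirical transfer function of the kind referred to above is commonly estimated as the smoothed ratio of surface to borehole Fourier amplitude spectra. A minimal sketch on synthetic records follows; the windowing and smoothing choices are illustrative, not those used in the thesis:

    import numpy as np

    def empirical_transfer_function(surface, borehole, dt, smooth=11):
        """Smoothed surface/borehole Fourier amplitude ratio: a common
        way to estimate a site's transfer function from array records."""
        freqs = np.fft.rfftfreq(len(surface), dt)
        ratio = np.abs(np.fft.rfft(surface)) / np.abs(np.fft.rfft(borehole))
        kernel = np.ones(smooth) / smooth          # moving-average smoothing
        return freqs, np.convolve(ratio, kernel, mode="same")

    # Synthetic demo: borehole noise plus a 2 Hz resonance at the surface.
    dt = 0.01
    t = np.arange(0, 40.96, dt)
    borehole = np.random.default_rng(0).standard_normal(t.size)
    surface = borehole + 3 * np.sin(2 * np.pi * 2.0 * t)
    f, etf = empirical_transfer_function(surface, borehole, dt)
    print(f"peak amplification near {f[1:][np.argmax(etf[1:])]:.2f} Hz")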
Item Open Access: Participative simulation of logistic networks: methodological frameworks and New Zealand case study (2025). Guiguet, Andres Carlos.

CONTEXT – The outstanding yield of modern food production relies on the silent role of logistic networks, or supply chains, in ensuring the reliable delivery of synthetic fertilizers to agricultural producers. Understanding the capabilities, strengths and vulnerabilities of these systems, formed by intertwined human, social and technological layers, is key to ensuring the food security and economic growth of countries.

With simulation modelling quickly becoming the ubiquitous approach for the analysis and design of logistic networks, a scarcity of methodological frameworks to support practitioners in the specific field of modelling logistic networks has become apparent. OBJECTIVE – The objective of this study is to put forward methodological frameworks that foster extended participation of stakeholders in the model development and validation processes. METHOD – The simulation process is broken down into development, validation, and use (or exploration) stages. Methodological frameworks to guide the development and validation stages are developed through an abductive and iterative approach relying on participative design principles, elements of grounded theory, and a post-positivist worldview. These frameworks are then used in a case study involving a New Zealand-based fertiliser distributor, where a simulation model is built and used to explore alternative scenarios involving internal and external changes. RESULTS – The study resulted in three independent frameworks, which were used together for the simulation of the high-level implications of alternative scenarios envisioned by the top management of a New Zealand business specialised in fertiliser distribution. The frameworks provide guidance on the translation of business operations into a process-oriented (as opposed to object-oriented) software model, the representation of strategic decision-making instances in a compact, modular form that can be embedded in larger models, and an alternative validation approach tailored to the use of historically acquired, rather than experimentally gathered, data, all with a focus on fostering stakeholder participation as a means of improving representational accuracy. As for the case study, the development, validation and use of the simulation model allowed the exploration of the outcomes caused by changes in environmental circumstances and internal system configurations. ORIGINALITY – Multiple sources of originality stem from this research. First, a method for improving the representational accuracy of the translation of business processes into the process-interaction ontology embraced by common simulation software languages. Second, a method for modelling strategic decision-making instances such that they can be embedded in larger models while retaining localness through modularity. Third, an alternative method for validating simulation models that allows the interpretation of data to be assessed and used for the validation process simultaneously. Finally, a case study illustrating the possibility of using a simulation model that relies on high-level behaviour emerging from low-level interactions to aid strategic choice and the understanding of a business's strengths and weaknesses.
Item Open Access: A critical state approach to characterise the liquefaction potential of sand-gravel mixtures (2023). Chheda, Nainesh (Deena-Praful).

Liquefaction of gravelly soils has been documented since the early 19th century; however, the focus of early work was on sand-silt mixtures, and documentation of gravels was coarse: that gravels do in fact liquefy was acknowledged only as an observation. Over time, earthquakes (EQs) have caused increasing damage to lives and to the natural and built environment. In the realm of EQ-related damage, the ground, or soils in general, acts as a buffer between the epicentre and the structures at a site of concern.

Further, once liquefaction of soils was eventually acknowledged as a problem, massive efforts were undertaken to understand its mechanics and causes, and thereby how to mitigate its ill effects. Along that path, the liquefaction of gravelly soils was another milestone, recognised as a problematic subject only in the early 20th century. This being a fairly recent acknowledgement, efforts in this direction (the particle size under consideration being gravels, > 2 mm) have only begun, and the outputs of this research are intended to complement that work for the betterment of our understanding of what is happening and how we may best address it, given the circumstances: social (life), environmental (structures), economic (cost, or cost-effectiveness) and, of course, political (our willingness to address the problem). Case histories from at least 29 earthquakes worldwide have indicated that liquefaction can occur in gravelly soils (both in natural deposits and man-made reclamations), inducing large ground deformation and causing severe damage to civil infrastructure. However, the evaluation of the liquefaction resistance of gravelly soils remains a major challenge in geotechnical earthquake engineering. To date, laboratory tests aimed at evaluating the liquefaction resistance of gravelly soils are still very limited, compared to the large body of investigations carried out on the liquefaction resistance of sandy soils. While there is general agreement that the liquefaction resistance of gravelly soils can be as low as that of clean sands, previous studies suggested that the liquefaction behaviour of gravelly soils is significantly affected by two key factors, namely relative density (Dr) and gravel content (Gc). While it is clear that the liquefaction resistance of gravels increases with increasing Dr, there are inconclusive and/or contradictory results regarding the effect of Gc on the liquefaction resistance of gravelly soils. Aimed at addressing this important topic, an investigation is currently being carried out by researchers at the University of Canterbury (UC). As a first step, a series of undrained cyclic triaxial tests were conducted on selected sand-gravel mixtures (SGMs), and inter-grain state framework concepts such as the equivalent and skeleton void ratios were used to describe the joint effects of Gc and Dr on the liquefaction resistance of SGMs. Following such experimental effort, this study is aimed at providing new and useful insights by developing a critical state-based method, combined with the inter-grain state framework, to uniquely describe the liquefaction resistance of gravelly soils. To do so, a series of monotonic drained triaxial tests will be carried out on selected SGMs. The outcomes of this study, combined with those obtained to date by UC researchers, will greatly contribute to the expansion of a worldwide assessment database, and also towards the development of a reliable liquefaction triggering procedure for characterising the liquefaction potential of gravelly soils, which is of paramount importance not only for the New Zealand context, but worldwide.

This will make it possible for practising engineers to identify liquefiable gravelly soils in advance and make sound recommendations to minimise the impact of such hazards on land and civil infrastructure.

Item Open Access: Multialgebras & related structures (1979). Nolan, Francis Maurice.

This thesis investigates the properties of two algebraic structures: multialgebras and partially ordered universal algebras. Multialgebras generalize the concept of a universal algebra to multivalued operations. Unlike universal algebras, however, there are many types of homomorphism associated with a multialgebra. A full homomorphism is defined and compared with the usual strong homomorphism of multialgebras. While they are similar in structure, the category of multialgebras and full homomorphisms is the dual of a category of Boolean ordered algebras. This yields a more natural theory than that of the analogous category equivalent to multialgebras and strong homomorphisms. Partially ordered algebras are universal algebras with a partially ordered base. They are treated from the viewpoint of a universal algebra with an additional unary multi-operation (the partial ordering). In this fashion their peculiar properties can be explained by referring to either the universal algebra part or the multialgebra part. In particular, the full and strong concepts of multialgebras are defined for the simpler structures and turn out to be dual notions.

Item Open Access: The seismic response of inelastic structures (1974). Sharpe, Richard Deane.

Because of the importance placed by modern building codes of practice on the deterministic analysis of ductile frames as a means of assessing their ability to withstand severe seismic ground motions, an investigation of the more important factors affecting such analyses has been made. The problems encountered in writing a comprehensive computer program, with which the sensitivity of a two-dimensional inelastic frame can be measured, are dealt with in depth. These range from the need to simplify the input data and printed results to the selection of an economical (and sufficiently accurate) numerical integration technique which can be shown to remain stable over a realistic frequency range. The difficulties met in designing a beam model which will exhibit a moment-curvature relationship that can be satisfactorily tracked are described, and a recommendation is made as to which method should be used. The sensitivity of a selection of frames to different aspects of their modelling is investigated in order to provide guidance as to the complexity of modelling that is required for dynamic analyses. In an attempt to correlate the damaging potential of various earthquake accelerograms, so that they may be related to the requirements of modern building codes of practice, a variety of possible scaling criteria were tested. Although no firm conclusions are reached as to which criterion is to be preferred, a series of inelastic analyses are reported in which the varying effects of different earthquakes can be seen. Finally, two examples of structures whose design benefited from having deterministic dynamic inelastic analyses performed are described, together with the computer program used.
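The abstract above does not name the integration scheme ultimately recommended, but the stability concern it raises is classically addressed by the constant-average-acceleration Newmark method, which is unconditionally stable for linear systems. A sketch for a single-degree-of-freedom oscillator under ground acceleration (the textbook formulation, not necessarily that of the thesis):

    import numpy as np

    def newmark_sdof(m, c, k, ground_acc, dt, beta=0.25, gamma=0.5):
        """Constant-average-acceleration Newmark stepping for a linear
        SDOF oscillator under ground acceleration. With beta = 1/4 and
        gamma = 1/2 the scheme is unconditionally stable for linear
        systems, regardless of the time step relative to the period."""
        n = len(ground_acc)
        u, v, a = np.zeros(n), np.zeros(n), np.zeros(n)
        a[0] = -ground_acc[0]                      # at-rest initial state
        k_eff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
        for i in range(n - 1):
            rhs = (-m * ground_acc[i + 1]
                   + m * (u[i] / (beta * dt ** 2) + v[i] / (beta * dt)
                          + (1 / (2 * beta) - 1) * a[i])
                   + c * (gamma * u[i] / (beta * dt)
                          - (1 - gamma / beta) * v[i]
                          - dt * (1 - gamma / (2 * beta)) * a[i]))
            u[i + 1] = rhs / k_eff
            v[i + 1] = ((gamma / (beta * dt)) * (u[i + 1] - u[i])
                        + (1 - gamma / beta) * v[i]
                        + dt * (1 - gamma / (2 * beta)) * a[i])
            a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt ** 2)
                        - v[i] / (beta * dt) - (1 / (2 * beta) - 1) * a[i])
        return u

    # 1 Hz oscillator, 5% damping, half-sine pulse ground motion.
    wn = 2 * np.pi
    t = np.arange(0, 10, 0.01)
    acc = np.where(t < 1.0, np.sin(np.pi * t), 0.0)
    u = newmark_sdof(1.0, 2 * 0.05 * wn, wn ** 2, acc, dt=0.01)
    print(f"peak displacement: {u.max():.4f} m")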
Item Open Access: Development and evaluation of machine learning tools to process an internet forum for clinical research of polycystic ovary syndrome (2024). Emanuel, Rebecca H. K.

This thesis explores the possibility of using internet forums and machine learning (ML) for clinical research, specifically through the exploration of the PCOS subreddit to research polycystic ovary syndrome (PCOS). PCOS is a heterogeneous condition that is estimated to affect up to 21% of reproductive-aged people with ovaries. It is diagnosed through the presence of at least two of the following: biochemical or clinical hyperandrogenism, menstrual irregularities or ovulatory dysfunction, and polycystic ovarian morphology (PCOM). Approximately 75% of people with PCOS have impaired insulin sensitivity. This relationship between PCOS and insulin resistance is not restricted to people who are overweight or obese. PCOS is also associated with subsequent metabolic disorders, infertility, pregnancy complications, sleep disturbances, poorer mental health, endometrial cancer, eating disorders and sexual dysfunction. Phenotypes can be derived from the inclusive nature of the diagnostic criteria, but clinically relevant phenotypes have not been agreed upon. There is currently worldwide dissatisfaction with the treatment of PCOS and the available research on the subject.
Item Open Access: Strain-induced stressing of concrete storage tanks (1990). Vitharana, Nihal Dhamsiri.

In this thesis, the effects of strain-induced loading on concrete liquid storage reservoirs are examined, with particular regard to temperature. Although the research is primarily directed at thermally induced loads, the formulations and findings are applicable to other strain-induced loadings such as shrinkage and swelling. The investigation incorporated both experimental and theoretical studies. Current methods for predicting moment-curvature responses and strain-induced loads are evaluated with respect to convenience of use and accuracy, and the ACI 318-83 Branson-type approach was found to be suitable. Design aids are presented for calculating section properties and strain-induced loads. A method is proposed for the effective moment-curvature response of concrete reservoir wall elements subjected to combined axial force and flexural moment. A parametric best-fit formulation is also derived for bi-linear effective moment-curvature representation. Six half-scale wall specimens were tested under mechanically applied and thermally induced uni-directional and bi-directional moments, with and without simultaneously applied tensile thrust. The measured responses were compared with theoretical predictions. No significant difference between uni-directional and bi-directional, or mechanically and thermally induced, loads was observed. These properties were incorporated in the non-linear analyses of cylindrical reservoirs, and the results were compared with the predictions of the usual methods. Current code provisions were found to be inaccurate, being either conservative or unconservative in different parts of the wall for different conditions. A basic study into the stress redistribution effects of circular rings is also reported. The detailed analysis of reservoirs rectangular in plan under strain-induced loadings was carried out using a finite element shell analysis program. Nine reservoir aspect ratios were considered. The stress redistribution effects are discussed. Design aids and examples are presented for thermal stresses in rectangular reservoirs subjected to different loading patterns.

Recommendations for future research are proposed.
Item Open Access: Thermal and emission performance studies on premixed meso-combustors for thermophotovoltaic applications (2025). Rong, Hui.

This study focuses on the thermal and emission performance of small-size premixed combustors by investigating the combustion and flow characteristics of various carbon-free and classical hydrocarbon fuels, including ammonia, hydrogen, and methane. Particular attention is given to the influence of combustor structure and inlet parameters on thermodynamic and emission performance. The novel structural designs demonstrated clear improvements in combustor performance, offering valuable insights into the optimization of small-scale combustion systems. One of the designs is a reverse-flow single-channel inlet and double-channel outlet (SIDO) combustor, aiming to enhance thermal performance. Increasing the inlet pressure (Pin) improves thermal performance and exergy efficiency while reducing nitrogen oxide emissions. Increasing the inlet velocity (Vin) can enhance the temperature uniformity of the combustor wall. Increasing the equivalence ratio (Φ) leads to a reduction of nitrogen oxide emissions, and the micro-combustor achieves its best overall performance at Φ = 1.0. Increasing the hydrogen blending ratio gives rise to weakened advection but enhanced diffusion, and the pressure loss (Ploss) can be reduced. Another design applies a porous medium (PM) in the small-size combustors. In comparison with the system without PM, the application of PM is found to lead to a significant improvement in thermal performance. There is a substantial 37.5% reduction in the standard deviation of the outer wall temperature (ST,W) at Vin = 2.0 m/s. The optimal thermal performance is achieved at Φ = 0.9. A higher porosity (σ) gives rise to lower entropy production within the PM, and the lowest entropy production resulting from heat conduction is achieved when σ = 0.8. By implementing PM, the exergy efficiency (ηexergy) is increased by 23.9% at Vin = 2.0 m/s. In general, the present investigation sheds physical insight into the entropy production and thermodynamic exergy performance of ammonia/methane-fueled micro-combustion systems with and without PM. For comparison, we proposed and studied a double-channel inlet and double-channel outlet (DIDO) combustor, which is shown to be capable of generating a vortex at the outlet, thereby reducing NOx emissions. Specifically, at an ammonia volumetric flow rate of 900 mL/min, the NO concentration at the outlet can be curtailed by 29.23%. The DIDO combustor yields a substantial enhancement in thermal performance, achieving a 51% reduction in ST,W at an ammonia volumetric flow rate of 500 mL/min, which significantly enhances the uniformity of the wall temperature. The peak thermal performance and maximum radiation efficiency (ηradiation) are reached at Φ = 0.9. Finally, we proposed and investigated a reverse-flow Tesla channel applied in a counter-flow combustor. Such a structured combustor yields a remarkable 72.6% improvement in combustor wall temperature at a hydrogen volume flow rate of 100 mL/min. The diodicity (Di) of the Tesla valve increases with higher hydrogen volume flow rate and decreases as Φ rises, stabilizing at Φ = 0.9.

The reverse-flow Tesla valve exhibits a more uniform pressure distribution and entropy production than the forward-flow Tesla valve. At Φ = 0.9, the hydrogen-to-air ratio maximized heat release, producing the highest entropy. Tesla-valve structured combustors demonstrate near-complete combustion before Φ reaches 0.9, with the combustion efficiency (ηcombustion) gradually decreasing after Φ reaches 1.0. Additionally, the effect of blending ammonia with various ratios of hydrogen was studied. To achieve stable ammonia-hydrogen combustion, the ammonia ratio can reach 20% for the reverse-flow Tesla valve, whereas the ratio for stable combustion in the forward-flow configuration is 10%. The increased flow resistance inside the reverse-flow structure promotes more complete combustion of the fuel-depleted mixture, thereby increasing the wall temperature. In contrast, the forward-flow structure, due to its lower flow resistance, extends the flame area of the mixed fuel, thereby improving wall temperature uniformity. The double-layer Tesla valve structure improves wall temperature uniformity by over 55% across varying flow rates. Both double-layer and single-layer structures demonstrate a significant enhancement in the combustor's thermal performance and overall performance, characterized by Nusselt and Peclet numbers.
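For reference, the diodicity Di quoted above is the standard figure of merit for a Tesla valve: the ratio of the pressure drop in the reverse (high-resistance) direction to that in the forward direction at the same flow rate. In LaTeX form,

    \mathrm{Di} = \frac{\Delta P_{\text{reverse}}}{\Delta P_{\text{forward}}}\bigg|_{\dot{m}}, \qquad \mathrm{Di} > 1,

with larger values indicating a stronger valving (flow-rectifying) effect.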
Item Open Access: Forming metal organic framework glass membranes for gas separation (2025). Stone, Dana M.

This thesis investigates the fundamental factors affecting the use of glass transformations to repair defective crystalline metal organic framework (MOF) membranes. This work aims to clarify our understanding of the major limitation that prevents MOF materials being used for gas separation membranes: intercrystalline defects. The research aimed to produce MOF membranes within tubular ceramic supports and evaluate the differences in gas separation performance between the crystalline and glass (ag) forms. A four-stage method was adapted to produce glass membranes, comprising an α-alumina tubular support, ZnO precursor deposition via Atomic Layer Deposition (ALD), in-situ solvothermal synthesis of ZIF-62, and defect healing through glass transformation. The gas separation performance of the ZIF-62 and agZIF-62 membranes showed low permeances (10⁻⁸ and 10⁻¹⁰ mol m⁻² s⁻¹ Pa⁻¹, respectively) and low selectivities (e.g. H₂/CO₂ of 3.5 and 4.3, respectively), which were identified as significant areas for improvement. To address the low permeance, ALD conditions to control membrane thickness were developed, reducing crystalline membrane thicknesses from 38 μm to 16 μm. The glass transition process led to further membrane thinning, down to 2 μm, due to a capillary effect. To address the low selectivity, the relationship between isothermal hold times, porosity, and macroscopic melting was examined via positron annihilation lifetime spectroscopy (PALS), adsorption studies, and visual imaging. These results showed that only limited retention of porosity was possible, and that isothermal treatments offered no control over pore structure. They also highlighted challenges in reproducibility during glass transformation by revealing the variability of agZIF-62 samples, including pore aperture sizes ranging from 3.2 to 3.7 Å. Finally, a pioneering alternative synthesis approach using chemical vapor deposition (CVD) was explored for the multi-ligand ZIF-62 to address the poor quality of the ZIF-62 membranes.

However, the polymorphic nature and high energy state of ZIF-62 prevented its synthesis via CVD, instead resulting in the formation of the dense ZIF-zni. Overall, this research provides foundational insights for enhancing the performance and scalability of ZIF-based membranes for gas separation.
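As context for the numbers above, the ideal selectivity for a gas pair is simply the ratio of the single-gas permeances, so an H₂/CO₂ selectivity of 3.5 means H₂ permeates 3.5 times faster than CO₂ through the same membrane:

    \alpha_{\mathrm{H_2}/\mathrm{CO_2}} = \frac{P_{\mathrm{H_2}}}{P_{\mathrm{CO_2}}}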
Item Open Access: A new approach to model locomotive kit set instructional design (2025). Bennet, Celyn.

Model locomotive construction offers a rewarding and intellectually stimulating activity within the broader context of railway modelling. Despite its benefits for cognitive and manual skill development, model locomotive construction is increasingly overshadowed by the prevalence of ready-made models. One reason is the significant barrier to entry for newcomers. This barrier arises from the lack of adequate support offered by commercially available "beginner" model locomotive kit sets, which do not address the challenges newcomers encounter during the construction process. This research investigates the challenges associated with model locomotive construction for four user groups (Non-Makers, Makers, Railway Modellers, and Locomotive Constructors) and proposes a redesigned instructional and product approach to bridge the identified gap between railway modelling and locomotive construction. Using the Stanford design methodology, this research empathised with novice constructors, investigated product and instructional design principles, and iterated through ideation and prototyping to create a product that supports users who are participating for the first time. User observation sessions were conducted with participants from all skill levels using a commercially available model locomotive kit set. These identified that Non-Makers, Makers, Railway Modellers, and experienced Locomotive Constructors all struggled with the construction process due to a lack of communication and synergy. The main issues participants experienced during the commercial kit observation sessions related to comprehension of the instructions. This led to errors being made, with some participants unable to complete the construction task. Focus groups and iterative prototyping were employed to redefine the Product Design Specification (PDS) and incorporate refined instructional and design principles to simplify the construction process. The final design outcome comprised an instructional system that embedded information into the packaging, so that the user was not only instructed during construction but also supported, building trust and confidence. The instructional system redesigned the kit set components to incorporate design principles for communication and engagement, such as constraints, which served to limit the options for user engagement, thereby building affordances and reducing errors. Additionally, the kit set packaging was redesigned, evolving from merely protecting components to incorporating instructional information and enabling assembly visualisation. The final design was validated through user testing sessions with participants across the four skill categories. The redesigned system demonstrated a significant reduction in errors and assembly time, as well as improved comprehension and user satisfaction. These findings confirm the effectiveness of incorporating instructional and design principles into the development of beginner-friendly model locomotive kits.

Future research would further investigate the refinement of the final design and explore other concepts presented during the ideation and prototyping rounds.

Item Open Access: Synthesis and characterisation of poly glycerol sebacate bioelastomers (2025). Mohsin, Hammad.

Context: Surgical meshes have been used in multiple areas of the human body, especially to treat hernia defects. However, they remain controversial regarding postoperative complications. Research motivation: Bioelastomers are not currently used in the commercial production of surgical meshes. Nonetheless, they have potential advantages in that their mechanical properties may provide mechanical stimuli that promote faster wound healing in soft tissue applications. Objectives: There is a need to better characterise bioelastomers, i.e. determine those properties that are important to biomedical engineering designers who are considering potential future implant applications. Results: This thesis describes the characterisation of one of the bioelastomers, specifically polyglycerol sebacate (PGS), by reporting on chemical structure, thermal behaviour (glass transition temperature, crystallisation temperature, shrinkage, decomposition temperature), and mechanical properties (Young's modulus, strength, elongation). The work also reports on the use profile for the material, i.e. the medical implications (hygroscopic properties), the manufacturability of meshes, and the product design implications. Findings: Two synthesis routes were attempted: microwave, and conventional inert gas and oven. This work established that the microwave route is not currently reliable. Conventional synthesis of PGS, using an inert atmosphere and curing in a vacuum oven, was successful. Originality: This work makes the novel contribution of a comprehensive characterisation of poly glycerol sebacate, including the rheology, which has not previously been reported in the literature. In addition, the work shows that the curing temperature of 140 °C cited in the literature resulted in a material with the correct chemical composition but different molecular segmental thermal behaviour. The work makes an additional contribution by offering a conceptual model of how various factors affect mesh infection, chronic pain, and hernia recurrence. Using this, a risk assessment framework was developed that quantifies the risk of complications based on the frequencies reported in the literature.
Item Open Access: A multimodal approach to investigate cognitive and neural mechanisms of construction hazard recognition: the roles of attention, situation awareness, and experience (2025). Zhang, Zhe.

The construction industry is hazardous due to its high-risk nature, with many fatalities attributed to failures of hazard recognition. A review of existing research in the construction domain reveals the following limitations: (1) limited understanding of hazard recognition in dynamic environments; (2) insufficient consideration of the combined impacts on situation awareness (SA) transition and hazard recognition; (3) subjective and static SA measurement methods; (4) lack of consideration of temporal SA transitions; (5) insufficient integration with cognitive psychology and neuroscience; and (6) limited understanding of the role of technologies in enhancing SA. The research aims to investigate how experience and the interplay between endogenous (top-down) and exogenous (bottom-up) factors affect SA transitions and hazard recognition in dynamic virtual construction environments.

The objectives are to: (1) identify the key cognitive factors underlying construction hazard recognition and the mechanisms by which digital technologies enhance SA; (2) investigate the effects of the interplay between bottom-up attention (B-U) and top-down attention (T-D) on hazard recognition; (3) examine the effects of augmented stimuli and safety goal setting on SA transition (from Level 1 to Level 3 SA) and hazard recognition; and (4) compare the differences in SA, path selection, and hazard recognition between novice and experienced workers under different conditions of augmented stimuli and safety goal setting. A mixed-methods approach was employed, combining a systematic review, scientometric analysis, and an experiment. The systematic review and scientometric analysis were employed to identify cognitive factors that influence hazard recognition in dynamic virtual construction sites. The experiment employed innovative technologies such as immersive virtual reality (IVR), eye tracking, electroencephalography (EEG), and event-related potentials (ERP) to examine how three key factors (B-U, T-D, and SA) affect hazard recognition in dynamic virtual construction sites. The research found that augmented stimuli and safety goal setting significantly improve B-U, T-D, SA transition, and hazard recognition. Moreover, experienced workers demonstrated a superior ability to recognize hazards, characterized by faster response times, increased fixation on hazardous objects, and heightened activity in brain regions associated with SA. This research makes significant contributions to both construction site safety practices and the academic literature by shifting traditional safety management approaches towards human-centered ones. This paradigm shift has the potential to revolutionize how researchers, government agencies, construction companies, and industries prioritize worker safety. Specifically, by transitioning from rigid protocols and regulations to a deeper understanding of cognitive processes, this work introduces a worker-centric approach that leverages digital technology to enhance hazard recognition. Future research should focus on developing multimodal assessment tools that integrate various neurophysiological measures to understand SA transitions in real time. Understanding the neural mechanisms underlying SA processing is crucial for developing targeted interventions. Techniques such as functional magnetic resonance imaging or magnetoencephalography can be employed to identify subcortical brain regions involved in SA transitions. Research should also investigate SA transitions in complex environments with high workload, time pressure, or uncertainty, to develop more effective safety protocols. It is highly recommended that a worker-centric approach be adopted in safety management practices, where safety protocols and training programs are tailored to meet the specific needs, abilities, and experiences of individual workers.
Item Open Access: Measurement and modelling of calcium tartrate precipitation in wine (2025). Muir, Jack.

Precipitates in wine are considered undesirable by the wine industry, as consumers perceive them as indicators of poor wine quality. Calcium tartrate precipitation is difficult to predict and can occur months after bottling, which makes it hard for winemakers to prevent. The levels of calcium and tartrate are important for predicting the precipitation of calcium tartrate, but wine also contains many interacting components that influence its formation and solubility in solution.

Any attempt to model and predict the precipitation of calcium tartrate must account for these interactions. A system of equations for ion equilibria, electroneutrality, and conservation of mass was solved using Newton's method. This used activity coefficients calculated with the mean spherical approximation method, which provides good estimates for a wide range of concentrations and for neutral molecules. The model accounted for the major species present in wine, including water, ethanol, the major organic acids, inorganic anions and cations, glucose, and fructose. Automatic pH adjustment was implemented to ensure the model correctly predicted the measured pH of the wine. Heun's method was incorporated to solve the precipitation rate equations and demonstrated how a solution with low supersaturation could suddenly form crystals after a long period. Wines with and without crystals were analysed using high-performance liquid chromatography, ion chromatography, and microwave plasma atomic emission spectroscopy. The pH, ethanol, calcium, tartrate, and malate concentrations were individually compared for the wines that formed calcium tartrate and the wines that formed no crystals. This showed that there was no significant difference between the groups for any of these factors taken individually. However, the model was able to account for the interactions between the wine components and predicted significantly higher supersaturation ratios for wines that formed calcium tartrate. There was overlap between the groups, but the model could still provide a useful indication of a wine's risk level. A reduced model containing only water, ethanol, calcium, tartrate, malate, and lactate was tested to see how many compounds were required for an adequate prediction. This reduced model was still able to predict significantly higher supersaturation ratios for wines that formed calcium tartrate and would be more practical for winemakers to use. pH adjustment was harder with the reduced model, so a sequential search was implemented to ensure that the code could solve automatically for every wine. A range of juices and wines were analysed to see if the model could be used early in the winemaking process to predict calcium tartrate formation. No relationship was found between the supersaturation ratio of the juice and that of the final wine, demonstrating that the model can only be used for prediction after most of the winemaking processes are finished. Ideally, the model would be used after cold stabilisation (when potassium bitartrate is precipitated) and close to the bottling of the wine. Some of the major factors that increase the risk of calcium tartrate precipitation are high pH, calcium, tartaric acid, and ethanol, and low malic acid. These factors all interact, so no simple ranges can be given within which there is no risk of precipitation. Winemakers can lower the likelihood of crystals forming by avoiding additives containing calcium, avoiding deacidification, considering acidification for high-pH wines, creating a wine with a lower ethanol content, and avoiding malolactic fermentation.
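To illustrate the Newton-solve-then-supersaturation idea on a drastically simplified system: the sketch below couples free calcium and tartrate through a single ion pair and returns a supersaturation ratio. The equilibrium constants are illustrative placeholders, activity corrections are omitted, and this is not the thesis's multi-component model:

    import numpy as np

    def supersaturation(ca_tot, t_tot, k_pair=200.0, ksp=1e-5):
        """Newton's method on a toy two-species equilibrium: free Ca2+
        (x) and tartrate (y) linked by one CaT ion pair, [CaT] = K*x*y.
        Constants are placeholders, not values from the thesis model."""
        x, y = ca_tot, t_tot                       # guess: all ions free
        for _ in range(50):
            f = np.array([x + k_pair * x * y - ca_tot,   # Ca mass balance
                          y + k_pair * x * y - t_tot])   # tartrate balance
            jac = np.array([[1 + k_pair * y, k_pair * x],
                            [k_pair * y, 1 + k_pair * x]])
            dx, dy = np.linalg.solve(jac, -f)
            x, y = x + dx, y + dy
            if np.max(np.abs(f)) < 1e-14:
                break
        return x * y / ksp                         # S > 1: supersaturated

    print(f"S = {supersaturation(ca_tot=2e-3, t_tot=1.3e-2):.2f}")

The full model in the thesis does the same thing at much larger scale: more species, electroneutrality and pH constraints, and activity coefficients from the mean spherical approximation.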
Item Open Access: Machine learning for automated trading (2025). Semple, William.

This thesis uses three machine learning algorithms to construct automated trading systems that predict the direction of share prices using technical analysis. Random forest, gradient boosting, and Bayesian additive regression tree (BART) methods were chosen. We use a strategy of buying and selling stocks over 5-day windows across the span of the trading period, using a set of pre-determined shares. We considered the buy-hold strategy, buying all of the predetermined shares at the start of the trading period and selling them at the end of the period, to be the baseline against which our systems were measured. Each machine learning algorithm had its hyperparameters tuned, with gradient boosting using three different tuning algorithms. The final results are promising: each of the three methods resulted in greater returns and risk-adjusted returns than the buy-hold strategy.
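A minimal sketch of the prediction task described above: classify the direction of the price over the next 5-day window from technical-analysis features, using a random forest on synthetic data. The two indicators, the labels, and the chronological split are illustrative assumptions, not the thesis's feature set:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic daily closes; in practice these would be real prices.
    rng = np.random.default_rng(1)
    close = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 1500)))

    def features(i):
        """Two toy technical indicators at day i: 5-day momentum and
        the gap between the 5- and 20-day moving averages."""
        mom5 = close[i] / close[i - 5] - 1
        ma_gap = close[i - 5:i].mean() / close[i - 20:i].mean() - 1
        return [mom5, ma_gap]

    # Label: did the price rise over the following 5-day window?
    idx = range(20, len(close) - 5)
    X = np.array([features(i) for i in idx])
    y = np.array([close[i + 5] > close[i] for i in idx])

    split = int(0.8 * len(X))            # chronological split, no shuffling
    model = RandomForestClassifier(n_estimators=300, random_state=0)
    model.fit(X[:split], y[:split])
    print(f"directional accuracy: {model.score(X[split:], y[split:]):.2f}")

The chronological (unshuffled) split matters here: shuffling would leak future information into training, inflating the apparent accuracy of any trading model.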
Item Open Access: Aerodynamic performance investigations of UAV propellers at low Reynolds number (2025). Dougherty, Sam.

This thesis summarises a comprehensive methodology for analysing low Reynolds number propellers using Blade Element Momentum Theory (BEMT), aiming to predict these propellers' aerodynamic performance more accurately in climbing and hovering conditions. Since many commercially available propellers lack published geometric data, a scanning method was first developed to extract this critical information. This method involved 3D laser scanning the propeller, meshing in Geomagic Wrap, and extracting chord length and pitch angle data in SolidWorks. The accuracy of this method was validated through visual/deviation analyses and comparison to manufacturer data. We then outline the fundamental BEMT equations along with methodologies for aerodynamic polar generation, polar extension, and general BEMT corrections. This detailed methodology allows general users to implement BEMT in their own software and produce more accurate propeller predictions. Three key contributions to BEMT accuracy were identified: 1) polar generation and extension, 2) general corrections, and 3) spanwise airfoil section/Reynolds number considerations. Suggestions on the optimisation of these elements are provided to improve model prediction accuracy. For this, polars were first generated using the XFOIL software, with an Ncrit value of 6 found to improve the agreement with the experimental data of Brandt. An extension method was then applied to the unstalled XFOIL polars to account for blade elements reaching angles of attack (AOA) beyond the unstalled region. An investigation of the Viterna & Corrigan and AERODAS extension methods found them to be largely interchangeable; however, AERODAS was favoured due to its simpler implementation. As far as the general BEMT corrections are concerned, the Prandtl tip-loss correction was found to improve the agreement by mitigating over-predictions of thrust in low advance ratio conditions. The Prandtl-Glauert compressibility correction had no effect on performance prediction, as none of the propellers tested reached conditions that warranted its application (Ma > 0.3). Reynolds number corrections had a minimal impact on performance predictions but were computationally inexpensive, which justified their inclusion. The employment of a rotational correction improved BEMT accuracy, with Corrigan & Schilling's stall delay model found to provide better agreement than Snel's 3D correction model. However, using the stall delay model together with the Viterna & Corrigan polar extension method was identified as a combination that could potentially lead to thrust over-predictions at low advance ratios. Finally, consideration of the airfoil section and Reynolds number variation across the blade span, with regard to polar inputs, was analysed. Using a single airfoil at 75% blade span produced results comparable to considering airfoil section variation across the blade span, but further testing would be required to assess the necessity of considering spanwise airfoil shape variation for other propeller geometries. Considering only a single Reynolds number at the 75% span position was found to produce better agreement than considering Reynolds number variation along the blade. This finding is likely applicable to other propeller designs, given that the approximate location of the average force on the blade is likely to be consistent across propeller geometries. It is worth noting that all findings reported in this work are based on APC propellers, and so further validation is required for different propeller sizes and geometries. Regardless, the established methodology provides a solid foundation for future BEMT-based propeller analysis and optimisation.
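To make the BEMT machinery concrete, here is a sketch of the classic fixed-point solve for one blade element, including the Prandtl tip-loss factor mentioned above. The linear lift slope and quadratic drag polar stand in for the tabulated XFOIL polars the thesis uses, and all geometry and flow values are made-up inputs:

    import numpy as np

    def bemt_element(r, R, chord, beta, B, V, omega):
        """Relaxed fixed-point solve of the propeller BEMT induction
        equations for one blade element with Prandtl tip loss."""
        a, ap = 0.1, 0.01                 # axial / swirl induction guesses
        for _ in range(200):
            phi = np.arctan2(V * (1 + a), omega * r * (1 - ap))
            alpha = beta - phi
            cl, cd = 5.7 * alpha, 0.02 + 0.05 * alpha ** 2
            f = (B / 2) * (R - r) / (r * np.sin(phi))
            F = (2 / np.pi) * np.arccos(np.exp(-f))     # Prandtl tip loss
            sigma = B * chord / (2 * np.pi * r)         # local solidity
            cn = cl * np.cos(phi) - cd * np.sin(phi)
            ct = cl * np.sin(phi) + cd * np.cos(phi)
            a_new = sigma * cn / (4 * F * np.sin(phi) ** 2 - sigma * cn)
            ap_new = sigma * ct / (4 * F * np.sin(phi) * np.cos(phi) + sigma * ct)
            a += 0.3 * (a_new - a)                      # under-relaxation
            ap += 0.3 * (ap_new - ap)
        return a, ap, np.degrees(alpha)

    a, ap, alpha = bemt_element(r=0.075, R=0.10, chord=0.015,
                                beta=np.radians(20), B=2, V=10.0, omega=530.0)
    print(f"a = {a:.3f}, a' = {ap:.4f}, alpha = {alpha:.1f} deg")

Repeating this solve along the span and integrating the element forces yields thrust and torque; the thesis's corrections (compressibility, Reynolds number, stall delay) modify cl and cd before this step.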
Item Open Access: Enhancing Autonomous Sensory Meridian Response (ASMR) through personalised triggers in virtual reality (2025). Ling, Jiaxuan.

This research investigates the effectiveness of personalised Autonomous Sensory Meridian Response (ASMR) triggers in enhancing users' ASMR sensations within virtual reality (VR), addressing the central question: do personalised triggers enhance ASMR sensations in VR? The study compares personalised and non-personalised triggers in a VR-based ASMR application across four dimensions: effectiveness, efficiency, duration, and subjective emotional responses. A mixed-methods approach with a primary focus on quantitative analysis was employed, supplemented by qualitative insights. Participants engaged with an immersive VR application featuring ASMR content and personalised elements. They completed an in-game task, followed by a self-report questionnaire to gather quantitative data, and participated in short semi-structured interviews to provide qualitative feedback. Quantitative results revealed that personalised triggers slightly improved ASMR tingling intensity, onset time, duration, and pleasantness, though the enhancements were not necessarily significant. Qualitative results concurred with these findings, as participants preferred the personalised triggers for their customised nature and calming properties. Distraction, a lack of diversity in triggers, and minor design flaws were also noted, pointing to areas for improvement. Further findings revealed connections between emotional tendencies and ASMR responsiveness: the pre-experiment calmness level and the likelihood of musical frisson are both positively correlated with ASMR tingling intensity. The study highlights the potential of personalisation in immersive VR ASMR applications, demonstrating its tendency to enhance the overall ASMR experience while outlining a clear pathway for future optimisation.

Item Open Access: Automating the assessment of Level One NCEA Programming (2025). Hickman, Henry.

This thesis investigates the implementation of an automated high school programming assessment system, designed to assess the new programming Achievement Standard, AS92004. This is a nationally recognised standard, outlined by the New Zealand Qualifications Authority (NZQA) for New Zealand's National Certificate of Educational Achievement (NCEA). The goal of automating assessment is to reduce teachers' workload and increase the equity and availability of programming education nationwide. This provides a case study of developing an automated assessment system that must conform to externally set criteria. We first explore automated programming assessment systems and determine which are suitable for assessing the achievement standard. This starts with an investigation of what tools teachers are currently using in the classroom, and what programming languages they are using to assess programming, which revealed that Python is the most common choice. We then investigate 23 different automated assessment systems, identified through a literature search and teacher forum postings, to determine which may be suitable for assessing the standard. We find that several systems could be adapted to assess the standard, and ultimately proceed with CodeRunner, a Moodle plugin designed to automatically grade code. We develop a custom question type and make some changes to CodeRunner to better assess AS92004, referring to this modified version of CodeRunner as NCEA-CodeRunner. We then investigate assessment integrity, specifically through the use of randomisation techniques. We provide a taxonomy of randomisation techniques and identify the contexts to which they are most relevant, based on the intended assessment scenario. We also discuss four different assessment scenarios, based on the combinations of formative and summative assessment paired with invigilated and non-invigilated assessment. We find that randomisation techniques serve different purposes in each assessment scenario, depending on the assessor's goals, and use this information to choose randomisation techniques suited to AS92004. We then explore experienced teachers' views on using NCEA-CodeRunner for assessment. We interview four experienced programming teachers and perform an inductive coding of these interviews to identify relevant themes and sub-themes. Among these four teachers, there was little agreement on what levels of context our questions should have, with each teacher holding differing views for differing reasons. There was also a range of views on how to interpret some of the more ambiguous aspects of the standard, and how it should be marked. However, there was agreement that the way NCEA-CodeRunner assesses AS92004 is valid, and that it should be released to the wider teaching community. While NCEA-CodeRunner is designed to reduce teacher workload and move away from manual grading, some aspects of the standard were difficult to fully automate. Therefore, after the initial release of the system, we analyse student submissions to determine whether these aspects can be automated. First, we investigate the effective use of conditions and control structures. We analyse 2,500 pieces of student work, at both university and high school level, and manually grade whether the code effectively uses conditions and control structures. We then use pre-existing code quality metrics to determine whether there is a correlation between a manual analysis of code and an automated one. Finding no strong correlations, we turn to machine learning techniques. We develop and evaluate multiple models to assess whether students are effectively using conditions and control structures. We find that, while there are improvements to be made, no models can be deployed without introducing the risk of failing students who should pass. A second aspect of the standard that is challenging to automate is whether students are "documenting their code with comments that clarify the purpose of code sections".

We again perform a human analysis of high school students' code submissions and manually rate each comment. We find that students are over-documenting their code, and that their behaviour in formative and summative assessment differs, with summative assessment attracting worse code comments. We then develop a prompt for GPT-4 and evaluate whether it is a suitable way of assessing this aspect of the standard, ultimately finding that it does not align with a manual analysis of code comments. Finally, NCEA-CodeRunner was used to assess AS92004 by over 30 teachers, assessing over 900 students, and we sought their feedback on the system via a survey of both teachers who used the system and teachers who did not. We find a variety of reasons for not using the system, even amongst a small group of responses; however, many of these teachers are willing to use NCEA-CodeRunner in future years. From the teachers who did use the system, we find that they felt it was a fairer, less stressful, and less time-consuming method of assessing AS92004. While there were some issues, such as moderation feedback, many were willing to use the system in the future. We also make suggestions for modifying the standard in future years, in order to make it more amenable to automated assessment without losing its intent. From this, we appear to have achieved our goal of saving teachers' scarce time and increasing the equity and availability of programming education nationwide.
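One of the randomisation techniques discussed above, per-student parameter randomisation, can be sketched generically in Python. This is not CodeRunner's actual template mechanism; the question content and field names are hypothetical:

    import random

    def make_question(student_id: str) -> dict:
        """Generate a per-student variant of a simple programming
        question. Seeding with the student ID keeps each student's
        variant stable across attempts while differing between
        students, which supports non-invigilated summative use."""
        rng = random.Random(student_id)      # deterministic per student
        price = rng.randrange(5, 20)
        qty = rng.randrange(3, 12)
        prompt = (f"Write a program that reads a quantity and prints "
                  f"the total cost of items priced at ${price} each.")
        # Hidden test: expected output for one sample input.
        return {"prompt": prompt, "test_input": str(qty),
                "expected_output": str(price * qty)}

    print(make_question("student-042"))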