Engineering: Theses and Dissertations
Recent Submissions
Item Open Access
Cognitive Biases and Perceptual Distortions in Human-Computer Interaction (2024) Suwanaposee, Pang

People interact with user interfaces daily to perform tasks such as web browsing, sending emails, watching videos and playing games. Many of their interactions involve making judgements and decisions, such as judging the duration of software updates, judging the quality of video recommendations, and deciding whether to examine word suggestions above a mobile keyboard. However, prior work in the psychology and behavioural economics literature shows that people are susceptible to cognitive biases and perceptual distortions, implying that these effects can influence judgements and decisions in interaction. Examples relevant to this thesis include perceiving time as longer or shorter than it is; the Barnum effect (people's tendency to assign higher quality ratings to personality descriptions presented as personalised for them than to identical descriptions presented as generally true of people); and probability weighting (the tendency to overweight small probabilities and underweight large probabilities). If cognitive biases and perceptual distortions apply in interaction, user interface designers must be aware of them, and helping designers understand them can create opportunities for improving user experiences. Therefore, the primary goal of this thesis is to provide novel knowledge and insights for improving research and practice in HCI by investigating the influence of cognitive biases and perceptual distortions in interaction. In addressing this goal, this thesis presents three empirical studies.

Study 1 explored the influence of audio effects and attention on users' perceived duration of interaction. This study was motivated by computer delays leading to user frustration, by prior psychology literature on factors such as audio and attention influencing time perception, and by the opportunity to extend a previous HCI study demonstrating distortions in users' duration judgements using audio effects. Study 1 had two stages: estimation of a wait duration (indicated with a progress bar), and pairwise comparisons between the durations of two interactive experiences with different effects. The first stage revealed that perceived duration varied across different audio conditions, and the second stage confirmed previous findings that increasing-tempo beeps can shorten perceived duration, extending these findings to interactions involving visual feedback (a progress bar) and direct interaction (playing a game). These findings suggest that designers can use audio to alter users' time perception during wait periods and reduce the adverse effects of computer delays.

Study 2 examined the effect of purported personalisation on users' perceived quality of system recommendations. This study was motivated by the prevalence of recommender systems in contemporary user interfaces and the widely observed Barnum effect. The experiment showed participants a set of identical movie recommendations that were purportedly personalised or non-personalised. Contrary to the Barnum effect, the results showed that the purportedly personalised recommendations received lower mean quality scores, with no statistically significant difference. This result suggests that Barnum-like effects of personalisation have minimal influence on perceived quality, and that designers should be cautious about depending on this effect to improve user experience.

Study 3 investigated the probability weighting function influencing users' decisions to examine or ignore text entry suggestions. The probability weighting function implies that users will exhibit a bias in which they overuse suggestions at low accuracy and underuse them at high accuracy, which can harm their text entry time. This study tested this prediction by having participants interact with five text entry suggestion systems, each with a different probability of showing the participant their required word each time the suggestions were updated, and analysing users' decisions to examine or ignore the suggestions. Experimental results confirmed the prediction, suggesting that designers should be wary of users exhibiting this bias and harming their performance. Moreover, as part of analysing users' bias, this study contributes a method for analysing and modelling text entry to examine the costs and benefits of text entry suggestions.

In summary, this thesis makes four primary contributions: (1) a review of prior work on a set of cognitive biases and perceptual distortions and their potential influence in interactive contexts; (2) an investigation of audio effects and attention influencing users' perceived duration; (3) an investigation of the Barnum effect influencing users' perceived quality of recommendations; and (4) an investigation of the probability weighting function in text entry suggestion interaction.
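As a concrete illustration of the probability weighting idea described in this abstract, here is a minimal sketch using the one-parameter weighting function of Tversky and Kahneman (1992); the gamma value is their published median estimate for gains, not a figure from the thesis:

```python
import numpy as np

def weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function.

    With gamma < 1, small probabilities are overweighted and large
    probabilities underweighted; gamma = 0.61 is their median estimate.
    """
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in [0.01, 0.1, 0.5, 0.9, 0.99]:
    print(f"p = {p:0.2f} -> w(p) = {weight(p):0.3f}")
# w(0.01) > 0.01 (overweighted); w(0.99) < 0.99 (underweighted),
# matching the over/underuse of suggestions the thesis predicts.
```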
Item Open Access
Active confinement of reinforced concrete columns (2024) Rincón Gil, Julián David

Many reinforced concrete (RC) columns constructed before the mid-1970s lack sufficient transverse reinforcement. The critical role of transverse reinforcement in enabling a column to maintain its integrity under large displacement reversals in the nonlinear range of response is now well understood. Poorly confined RC columns undergo rapid resistance decay due to the formation of criss-crossing inclined cracks, which can cause abrupt failure or gradual disintegration and trigger collapse of the structure. Such columns need to be strengthened to increase their drift capacity. Although there are several alternatives for retrofitting RC columns, they often require specialised workmanship and equipment, and involved installation procedures. This research examines an easy-to-design and easy-to-implement retrofit and repair technique involving external post-tensioned clamps fastened around the column.

Ten large-scale RC columns and six beams were strengthened with the proposed clamps. Test results suggest that the lateral prestress applied by the clamps, σL, increases the shear at inclined cracking and the drift capacity. The unit shear stress at inclined cracking was observed to be nearly proportional to √(1 + σL/ft), where ft is the tensile strength of the concrete. Test results from RC beams with clamps suggest that such an increase may lead to similar increases in the shear stress at failure, vu. This observation implies that the fraction of shear strength attributed to the concrete (vc) is also proportional to √(1 + σL/ft). Regarding column drift capacity, measured drift capacities of columns strengthened with clamps were compared with drift capacity estimates for similar columns with rectilinear ties. Comparisons were made using four closed-form equations and a machine-learning (ML) algorithm calibrated to estimate the drift capacity of tied columns. The ratio of measured to estimated drift capacity exceeded 1.0 when using the closed-form equations and was approximately 1.3 when using the ML algorithm.

Four other RC columns underwent testing with the proposed clamps applied as a repair measure. Repairs resulting in yielding of the longitudinal reinforcement and in drift capacities exceeding what would be expected for a column with rectilinear ties met two criteria: a) the ratio of nominal shear resistance of the clamps vs to shear demand vmax was larger than 0.6, and b) the measured lateral expansion of the concrete column core was smaller than 1%. Although measuring lateral expansion in the field can be challenging, a correlation between lateral expansion and maximum crack width was observed. Overall, the experimental tests showed that the proposed post-tensioned clamps can be effective as a retrofit and repair technique for non-ductile RC columns.
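A quick numerical reading of the reported proportionality, as a sketch with illustrative values for σL and ft that are not taken from the thesis:

```python
import math

def shear_enhancement(sigma_L, f_t):
    """Relative increase in shear stress at inclined cracking,
    per the reported proportionality v ~ sqrt(1 + sigma_L / f_t)."""
    return math.sqrt(1.0 + sigma_L / f_t)

f_t = 3.0                          # concrete tensile strength, MPa (illustrative)
for sigma_L in [0.5, 1.0, 2.0]:    # lateral prestress from the clamps, MPa
    print(f"sigma_L = {sigma_L} MPa -> factor = {shear_enhancement(sigma_L, f_t):0.2f}")
# Doubling the clamp prestress raises the cracking shear sub-linearly,
# which is what the square-root form implies.
```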
Item Open Access
A high-granularity, non-invasive, and low-cost method for quantifying panel radiator operation (occupant heating behaviour) in single-occupant office spaces. (2024) Andrade Beltrán, María Isabel

Space heating accounts for 36% of the total energy demand in buildings, with district heating systems using panel radiators satisfying a significant proportion of the global heating demand. Occupant operation of heating systems significantly impacts building energy use, especially when occupants control heating and cooling directly. Addressing the heating behaviour of occupants offers an opportunity for significant reductions in energy use through behavioural change that eliminates wasteful use of energy. A better understanding of the behavioural aspects of space heating energy use can provide opportunities to reduce energy consumption, meet carbon and operational cost reduction targets, and improve the comfort and productivity of building occupants. Behavioural interventions to reduce energy consumption are considered essential, but their effectiveness in achieving emission reduction targets is often compromised, mainly due to the lack of accurate energy consumption data at the individual occupant level. Despite significant advances in the assessment of occupant behaviour, the ongoing challenge is to develop highly granular, non-invasive and cost-effective methods that provide quantitative data on individual heating energy use. Existing methods for quantifying occupant heating behaviour, such as sensor-based methods, provide only aggregated data, and options such as flow meters or cameras are costly and intrusive. In the absence of accurate tools and metrics to quantify occupant heating behaviour, surveys are commonly used to collect data on space heating practices. While surveys are affordable and scalable, they suffer from inaccuracies due to discrepancies between reported and actual behaviour. An important knowledge gap exists in the understanding of occupant behaviour, particularly in the context of radiator-based district heating systems. Addressing this gap can improve the ability to design and test effective energy-related interventions.

To overcome these limitations, this thesis introduces the Radiator Heating and Temperature Measurement (RHTM) technique, which quantifies the operation of panel radiators in single-occupancy offices using transient indoor air and radiator surface temperatures. The technique introduces two innovative metrics: Time-at-Temperature Difference (TTD) and Cumulative Temperature Difference (CTD). The TTD metric communicates a comprehensive characterisation of radiator operation through a bar graph showing the duration of use at different temperature differences, which are indicators of the intensity of radiator operation. The CTD metric quantifies total radiator energy use over a period of time, allowing occupants to be categorised according to their individual heating energy use. Comparisons between occupants can be easily made from time series data by applying the RHTM technique and analysing the TTD and CTD metrics, facilitating the inference and evaluation of heating behaviours.

The new RHTM technique was applied to a case study covering a 13-working-day period and involving 30 office occupants. The technique effectively characterised radiator usage (reflective of occupant heating behaviour), enabling a rigorous comparison between occupants. This approach facilitated non-intrusive, high-granularity, and low-cost data acquisition for various purposes, including (1) assessing total energy consumption among users, (2) contrasting the duration of radiator usage among occupants, and (3) comparing the time spent at different temperatures over the course of the study. The case study demonstrated the effectiveness of the RHTM technique in overcoming the challenges associated with monitoring radiator operation at the individual occupant level. CTD analysis efficiently facilitated the categorisation of occupants based on total heating energy consumption during a given period. The TTD bar graphs provided comprehensive insights into radiator operating times and temperature differences, revealing nuanced user interactions with the heating system. In particular, TTD served as a novel approach for pinpointing inefficient heating energy use, thus potentially supporting the design of targeted behavioural interventions. Consequently, the application of the RHTM technique allows a more robust comparative analysis between users, providing unique information not captured by traditional methods such as energy meters, BMS, indoor temperature measurements, or surveys.

Surveys are a commonly used method for assessing occupant behaviour in buildings, and the proposed technique can complement this widely used method and significantly enhance its insightfulness. The case study includes an examination of survey data in the light of quantitative measures obtained using the RHTM technique. The results revealed subtleties in occupants' heating behaviour that are often overlooked in conventional survey analysis. For example, scenarios were observed where survey responses on heating habits correlated with observed TTD data, while in other cases discrepancies were evident. These findings highlight the importance of using data-rich metrics such as CTD and TTD to improve survey-based assessments of occupant heating behaviour. Overall, the RHTM technique and the CTD and TTD metrics provide a strong basis for behavioural studies in intervention design and testing, providing a means of obtaining accurate, high-granularity data on occupant heating behaviour that is lacking in traditional assessment methods. With this type of data, behavioural interventions can produce changes that contribute to optimising heating system operation and hence reducing energy use in buildings. As a result, the RHTM technique for quantifying radiator operation advances knowledge of building energy performance and energy-related occupant behaviour.
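A minimal sketch of how the two metrics could be computed from paired time series of radiator surface and indoor air temperature. The function name, temperature-difference bins, and rectangle-rule integration are assumptions for illustration, not the thesis's implementation:

```python
import numpy as np

def ttd_ctd(t_hours, t_radiator, t_air, bin_edges=(0, 5, 10, 20, 30, 50)):
    """TTD: time spent in each radiator-air temperature-difference band (h).
    CTD: cumulative temperature difference over the period (degC*h)."""
    dT = np.clip(np.asarray(t_radiator) - np.asarray(t_air), 0, None)
    dt = np.diff(t_hours)                # duration of each sample interval
    dT_held = dT[:-1]                    # difference held over each interval
    ttd, _ = np.histogram(dT_held, bins=bin_edges, weights=dt)
    ctd = float(np.sum(dT_held * dt))    # rectangle-rule integral of dT
    return ttd, ctd

# Tiny illustrative record: four hourly samples
t = np.array([0.0, 1.0, 2.0, 3.0])
rad = np.array([45.0, 50.0, 30.0, 22.0])
air = np.array([20.0, 21.0, 21.0, 21.0])
ttd, ctd = ttd_ctd(t, rad, air)
print("TTD per band (h):", ttd, "| CTD (degC*h):", ctd)
```

Binning the held temperature difference gives the TTD bar graph directly, and the CTD scalar is what allows occupants to be ranked by total heating energy use.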
Item Open Access
Improving urban food supply chain resilience using discrete-event simulations of local supply chains. (2024) Wight, Joshua D. D.

Following a disaster, local supply chains are essential to food resilience. Strengthening these supply chains requires the ability to evaluate alternative interventions. However, we must improve our ability to compare how different investment decisions enable local supply chains to deliver sufficient food to retail stores following a disaster. This is of particular concern in the Hutt Valley, New Zealand, whose approximately 150,000 residents potentially face 90 days without any road access to the valley following a Mw 7.5 earthquake on the Wellington Fault. Such a disaster would have major implications for delivering food resources to retail stores in the Hutt Valley, including disruptions to roads, electricity grids, and other critical infrastructure. It is important that residents can access fully functioning supermarkets with food stocks and electricity to run refrigeration and electronic transactions. Emergency managers have identified two potential options: deliver goods including food, generators, and fuel via a sea route on barges over Wellington Harbour, or build new roads and upgrade current ones. However, both options are highly dependent on good weather. Pre-positioned stock may also be considered if both options are insufficient for delivering food. Decision-makers currently do not have the ability to weigh these options or to analyse the resources required to deliver sufficient food to the Hutt Valley. We will develop and use a Discrete-Event Simulation (DES) to address this. Using this model, we will explore different local supply chain interventions that strengthen local food distribution following a disaster. The quantitative output provides decision-makers with the ability to better understand the risks, resources, uncertainty, and trade-offs of the different intervention options.
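To give a flavour of the discrete-event approach, here is a minimal sketch using the simpy library: a barge periodically resupplies a retail stock buffer that residents draw down daily. All capacities, rates, and timings are invented for illustration and bear no relation to the thesis's model:

```python
import simpy

def demand(env, store, daily_need):
    """Residents draw food from the retail stock buffer each day."""
    while True:
        yield env.timeout(1)                       # advance one day
        taken = min(daily_need, store.level)
        if taken:
            yield store.get(taken)
        if taken < daily_need:
            print(f"day {env.now:>2.0f}: shortfall of {daily_need - taken} t")

def barge(env, store, load, period):
    """A barge delivers a fixed load across the harbour every `period` days."""
    while True:
        yield env.timeout(period)
        room = store.capacity - store.level
        if room:
            yield store.put(min(load, room))

env = simpy.Environment()
stock = simpy.Container(env, capacity=400, init=200)   # tonnes in retail stores
env.process(demand(env, stock, daily_need=60))
env.process(barge(env, stock, load=150, period=3))
env.run(until=30)                                      # simulate 30 days
```

Swapping the barge for a road convoy process, adding weather-dependent delays, or seeding pre-positioned stock are the kinds of interventions such a model lets one compare quantitatively.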
Item Open Access
Exploring indigenous knowledge in VR design: Incorporating kaupapa Māori to increase engagement when collaborating with a virtual agent. (2024) McNeill, Hēmi Ateremu

This thesis explores the integration of kaupapa Māori principles into Virtual Reality (VR) environments, focusing on collaboration with virtual agents within Human-Computer Interaction (HCI) research. Kaupapa Māori, as an indigenous knowledge system, and kaupapa Māori research, as an indigenous methodology, offer a cultural lens through which the design and implementation of VR technologies can enhance user engagement, foster inclusive design, and address the grand challenges in HCI identified by Stephanidis et al. (2019). The research is motivated by a gap in the existing literature regarding the effects of incorporating kaupapa Māori into the design of VR environments with collaborative virtual agents. Through a qualitative user study, this thesis investigates how kaupapa Māori interventions influence engagement with a virtual agent, and assesses the implications for inclusive design from a kaupapa Māori perspective. The study was structured around three primary research aims: 1) to investigate kaupapa Māori interventions in VR, 2) to use these interventions to prime engagement when collaborating with a virtual agent, and 3) to contribute to inclusive design principles informed by kaupapa Māori.

Key findings reveal that kaupapa Māori interventions, particularly those inspired by pōwhiri (the Māori welcoming ceremony) and whakapapa (genealogy), can impact users' sense of welcome and connection, and encourage engagement within the virtual environment. While the interventions aimed at priming engagement produced subtle effects, they underscored the complexity of designing meaningful interaction in VR. Furthermore, the study highlights the critical role of cultural perspectives and practices, as provided through wānanga, in the development of culturally informed inclusive design. The research contributes to HCI by demonstrating the potential of indigenous knowledge systems to enrich digital environments, providing novel insights into influencing user engagement and motivation, and expanding the discourse on culturally informed technology design. This thesis advocates for the inclusion of kaupapa Māori and other indigenous epistemologies in the broader field of HCI, suggesting that such integration not only addresses current challenges within the discipline but also paves the way for more accessible, equitable, and culturally resonant technological innovations.
Item Open Access
Coordinated voltage control strategy in a low voltage distribution network. (2023) Acharya, Parash

With more Distributed Generation (DG) and modern loads such as Electric Vehicles (EVs) installed in low-voltage networks, reactive power controllers will be required to maintain voltages at each grid point within acceptable limits. A reactive power controller connected at a particular Installation Control Point (ICP) will only maintain the voltage of the point to which it is connected, and is expected to have less effect on other ICPs. A centralised reactive power controller within a distribution network could coordinate the distributed sources to maintain the voltages of different ICPs within acceptable limits. This research focuses on developing a coordinated voltage control strategy acting through the various DGs within the network. Finally, the coordinated voltage control system is compared with a distributed control system acting through the various DGs.

Load-flow studies are necessary to assess the overall effect of a voltage controller within a distribution network. A model of the different power system components within a Low Voltage (LV) network was developed and used to study the voltage profile of the IEEE 13-bus system. A nodal admittance primitive (Yprim) matrix is formed for each individual component, and an overall Yprim matrix is then assembled for the network. The voltage profile of the IEEE 13-bus network is studied using the developed component models, and the power-flow results are compared with the standard IEEE results.

Similarly, a study of the voltage profile in a typical New Zealand distribution network is performed, including the impact of neutral voltage shift on the bus voltages. Two networks were selected for the analysis: one representative of most city/commercial networks, and one selected randomly. Real half-hourly household load data was used to perform power-flow analyses of these networks. A yearly power-flow analysis is performed using real half-hourly household load data for a typical city/commercial network of 46 ICPs, with 19 residential and 27 non-residential loads. The neutral voltage shift analysis for an unbalanced load is performed for the randomly selected network. For the same network, the change in voltage in each phase due to the neutral voltage change is also analysed. This work clarifies how much of the voltage variation in an LV network is attributable to neutral point voltage shifts. Finally, voltage sensitivity analysis is performed for the typical city/commercial network using real load data for each ICP. Results show that, for the same amount of injected reactive power, there is a larger voltage change at nodes connected to the feeder with higher loads.

A linearised VAr optimisation model with a loss-minimisation objective is developed for the distribution network. Loss sensitivity and voltage sensitivity matrices are derived and used in the VAr optimisation model while minimising the real power loss of the network. The network voltages are held within the specified limits by injecting or absorbing VAr from different parts of the network, representing randomly allocated DG inverters that can absorb or inject reactive power. Finally, the optimisation results are presented for the 46-bus typical city/commercial network representing a real distribution network within New Zealand. The results show that the network can be operated within the specified voltage limits by injecting or absorbing VAr from different parts of the network while minimising the losses.
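A minimal numerical sketch of the kind of calculation described: assemble a bus admittance matrix from branch primitives, then form a linearised voltage sensitivity matrix. The 3-bus network, impedances, and the decoupled dV/dQ approximation are all invented for illustration and are not the thesis's method:

```python
import numpy as np

# Branches: (from_bus, to_bus, series impedance z in per-unit)
lines = [(0, 1, 0.02 + 0.06j), (1, 2, 0.03 + 0.09j), (0, 2, 0.05 + 0.15j)]
n = 3
Y = np.zeros((n, n), complex)
for i, j, z in lines:
    y = 1 / z                        # primitive admittance of the branch
    Y[i, i] += y                     # self terms
    Y[j, j] += y
    Y[i, j] -= y                     # mutual terms
    Y[j, i] -= y

# Decoupled linearisation around V ~ 1 pu: dQ ~ -B dV  =>  dV ~ -inv(B) dQ,
# with B the imaginary part of Ybus reduced by removing the slack bus (bus 0).
B = Y[1:, 1:].imag
S = -np.linalg.inv(B)                # voltage sensitivity matrix dV/dQ (pu/pu)
print("dV/dQ sensitivities:\n", S)
# Larger diagonal entries mean a VAr injection at that bus moves its own
# voltage more, consistent with the larger changes seen at heavily loaded feeders.
```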
Item Open Access
Feature-based and deep learning segmentation of RNAscope stained breast cancer tissues using whole slide images. (2024) Davidson, Andrew

RNAscope staining of breast cancer tissue allows pathologists to deduce genetic characteristics of the cancer by inspection at the microscopic level, which can lead to better diagnosis and treatment. Chromogenic RNAscope staining is easy to fit into existing pathology workflows, but manually analysing the resulting tissue samples is time consuming, and there is a lack of verified supporting methods for quantification and analysis. This thesis covers the development of methods to annotate and process image data from whole slide images of tissue microarrays, and then to accurately segment and quantify the RNAscope dots (each representing a single RNA transcript) in this tissue.

We first developed a method to automatically find and label each core in a tissue microarray grid from a whole slide image, which achieved a precision of 99.63%. Following this, we created a dataset of 480×480-pixel breast cancer tissue patches and had the RNAscope dots on these patches annotated by two experts using a custom QuPath script. This dataset provided both training data for the development of further methods and sufficient data to produce a baseline expert inter-rater agreement score; the expert inter-rater agreement, as assessed by F1-score, was 0.596.

Next, we investigated the usefulness of grey level texture features for automatically segmenting and classifying the positions of RNAscope dots in breast cancer tissue. Feature analysis showed that a small set of grey level features, including Grey Level Dependence Matrix and Neighbouring Grey Tone Difference Matrix features, was well suited to the task. This feature-based method performed similarly to expert annotators at identifying the positions of RNAscope dots, with an F1-score of 0.571.

Finally, we developed and optimised a novel deep learning method focused on accurate segmentation of RNAscope dots in breast cancer tissue. The deep learning network is convolutional, using ConvNeXt as its backbone. The upscaling portions of the network use custom, heavily regularised blocks to prevent overfitting and early convergence on suboptimal solutions. The resulting network is modest in size for a segmentation network and able to function well with little training data. This deep learning network outperformed manual expert annotation at finding the positions of RNAscope dots, with a final F1-score of 0.745. This final score was obtained after in-depth analysis and optimisation of multiple model parameters, and exploration of artificial data generation to improve training performance. The methods developed and analysed in this thesis provide an accurate, automated pipeline for quantification of RNAscope that could be integrated into the pathology workflow.
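For context on the F1-scores quoted (0.596 inter-rater, 0.745 for the network): dot-detection F1 is typically computed by matching predicted to ground-truth dot positions within a small tolerance radius. A sketch of one such greedy matching, with the radius and matching strategy chosen arbitrarily rather than taken from the thesis:

```python
import numpy as np

def detection_f1(pred, truth, radius=4.0):
    """Greedy one-to-one matching of predicted to true dot centres.
    A prediction within `radius` pixels of an unmatched true dot is a TP."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    unmatched = set(range(len(truth)))
    tp = 0
    for p in pred:
        if not unmatched:
            break
        dist = {i: np.hypot(*(truth[i] - p)) for i in unmatched}
        best = min(dist, key=dist.get)
        if dist[best] <= radius:
            unmatched.remove(best)
            tp += 1
    precision = tp / len(pred) if len(pred) else 0.0
    recall = tp / len(truth) if len(truth) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1

print(detection_f1(pred=[(10, 10), (40, 42), (90, 90)],
                   truth=[(11, 9), (41, 44)]))   # (0.667, 1.0, 0.8)
```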
Item Open Access
Investigating the application of an extremophilic archaeon, Thermococcus waiotapuensis WT1ᵀ, for the production of H₂ from dairy and brewery waste. (2024) McKay, Connor

Hydrogen (H₂) is an ideal fuel that can be used to limit non-renewable energy dependence for a more sustainable society. Biological hydrogen production from industrial waste substrates offers a pathway to make hydrogen in a cost-effective and eco-friendly manner [1]. The New Zealand dairy and brewery industries generate a considerable amount of proteinaceous waste by-products annually, consisting of dissolved air flotation (DAF) sludge, and spent yeast, hops, and grain respectively. Thermococcus waiotapuensis WT1ᵀ is a thermophilic, sulphur-dependent archaeon that grows heterotrophically on complex proteinaceous substrates; with a genomic potential suggesting near-maximal hydrogen yields, it may provide a sustainable utilisation strategy for these proteinaceous dairy and brewery by-products [2, 3, 4]. Thermophilic digestion of dairy DAF sludge reduces both the pathogen and organic load while producing hydrogen as a by-product in a single treatment step. Similarly, fermentation of brewery by-products also results in biogas production. The goal of this Master's project was to determine whether T. waiotapuensis WT1ᵀ can be utilised to produce H₂ on substrates emulating dairy and brewery waste streams.

To achieve this goal, the ability of T. waiotapuensis WT1ᵀ to utilise a range of growth substrates was determined before testing substrates simulating common waste streams of the brewery and dairy industries. Furthermore, the range of carbon, nitrogen, and sulphur compounds that support T. waiotapuensis WT1ᵀ growth was expanded on and compared with prior findings in the literature. Confirmed T. waiotapuensis WT1ᵀ end-products included acetate, succinate, CO₂ and H₂, with H₂S production detected but not confirmed via the strain's growth on S⁰. No ethanol, formate, or lactate was produced on simulated substrates of either DAF sludge or spent yeast. These results suggest T. waiotapuensis WT1ᵀ is capable of near-maximal H₂ yields on common waste substrates of the brewery and dairy industries; however, further studies are required to determine the stoichiometric production of H₂. Additionally, bioreactor growth of T. waiotapuensis WT1ᵀ was successfully achieved, scaling cultivation from 50 mL to 2.2 L and showing that T. waiotapuensis WT1ᵀ could be employed for industrial scaled-up growth. T. waiotapuensis WT1ᵀ growth was not impacted by sulphur or H₂ concentrations. DSMZ 934 medium sulphur supplementation could be reduced by a factor of 10 without any impact on T. waiotapuensis WT1ᵀ growth or headspace gas production. This would allow lower sulphur supplementation, reducing the input costs associated with industrial fermentation and further supporting the industrial potential for T. waiotapuensis WT1ᵀ to utilise these waste substrates.

The media dilution series found that increasing media dilution limited gaseous by-product production, indicating that nutrients are the limiting factor for T. waiotapuensis WT1ᵀ growth. Carbon was indicated to be the limiting nutrient, as the onset of the stationary phase coincided with the exhaustion of pyruvate. T. waiotapuensis WT1ᵀ was found to produce H₂ on the sulphur substrates cystine, cysteine, thioglycolate, S⁰ and thiosulfate. S⁰ produced less H₂ than the other sulphur sources, indicating that S⁰ is not the optimal sulphur supplementation for H₂ production. The decreased H₂ production is likely due to increased H₂S production, as only media supplemented with S⁰ displayed an unconfirmed H₂S peak during GC analysis. T. waiotapuensis WT1ᵀ is hypothesised to produce more H₂S than H₂ in the presence of S⁰, similar to P. furiosus Vc1, which produces H₂ and H₂S in a 40:60 ratio in media with S⁰ [5]. The optimal sulphur source among those tested in this work was found to be cysteine; however, further work should be completed to confirm this finding.
Item Open Access
Artificial intelligence technologies for emotion recognition : virtual therapy for ASD. (2024) Smith, Jordan

This thesis presents a speech emotion recognition (SER) model and integrates it into a digital therapeutic tool for enhancing emotion recognition therapy in individuals with autism spectrum disorder (ASD). Motivated by the need for interactive and software-based therapeutic tools in the field of ASD therapy, this research endeavours to bridge the gap between new technological innovation and therapeutic application. The SER model uniquely combines a convolutional neural network (CNN) spectral model and a linear regression (LR) prosodic model for accurate emotional arousal prediction from speech, validated using the IEMOCAP and MSP-IMPROV datasets. The mean-ensembled model, merging spectral and prosodic features, showed superior accuracy on the key datasets: a mean absolute error (MAE) of 0.52, Pearson correlation coefficient (PCC) of 0.67, and concordance correlation coefficient (CCC) of 0.66 on the IEMOCAP test set, and an MAE of 0.44, PCC of 0.67, and CCC of 0.66 on the MSP-IMPROV test set, surpassing the individual CNN and LR models. This underscores the ensemble's robustness and its usefulness in ASD emotion recognition therapy.

The project further developed a user-interactive digital therapeutic system, incorporating Soul Machines for the digital avatar and Google Dialogflow for natural language processing, operating on a local server to adapt conversational dynamics based on emotion predictions from the SER model. A GUI was also created to display emotion estimates in real time and to provide flexibility for integrating additional emotion estimation models. This work sets the stage for real-time testing and the incorporation of ASD-specific therapy modules as areas for future enhancement. This research lays the groundwork for a comprehensive, technology-enabled therapeutic system for ASD, demonstrating the effectiveness of combining SER models with digital tools. It promises significant advancements in creating accessible, effective, and personalised therapeutic interventions, potentially enriching the lives of individuals with ASD through cutting-edge technology.
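The CCC figures quoted are presumably Lin's concordance correlation coefficient, which penalises both poor correlation and systematic bias between predicted and true arousal; a quick reference implementation for comparison:

```python
import numpy as np

def ccc(y_true, y_pred):
    """Lin's concordance correlation coefficient."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    mu_t, mu_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()        # population variances
    cov = ((y_true - mu_t) * (y_pred - mu_p)).mean()
    return 2 * cov / (var_t + var_p + (mu_t - mu_p) ** 2)

truth = [0.2, 0.5, 0.7, 0.9]     # invented arousal labels
pred  = [0.3, 0.4, 0.8, 0.8]     # invented model outputs
print(f"CCC = {ccc(truth, pred):0.3f}")   # 1.0 only for perfect agreement
```

Unlike PCC, CCC drops below 1 even for perfectly correlated predictions that are shifted or scaled, which is why it is a common companion metric in arousal regression.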
Item Open Access
Investigate on how a simulated Cognitive Augmentation to detect deception, impact decision making confidence in negotiations (2024) Seneviratne, Amali

This thesis explores the simulated integration of cognitive augmentation (CA) and augmented reality (AR) for deception detection within negotiation contexts. It assesses how AR visualizations of deception probabilities impact decision-making confidence and user acceptance. The study reveals that while visualization methods alone do not significantly alter confidence levels, a strong positive correlation exists between users' comfort with CA technologies and their decision-making confidence. This underscores the importance of user-centric design and familiarity with technology for effective CA implementations. The research also addresses public perceptions and ethical considerations, suggesting cautious optimism toward these technologies in high-stakes environments. Recommendations include enhancing algorithm accuracy and technical transparency, expanding interface designs, and developing ethical frameworks to support technology adoption.

Item Open Access
Novel 3D-Printed Catalysts for hydrogen peroxide-powered spaceflight (2023) Reid, Simon

Concentrated hydrogen peroxide is a cheap, efficient, and relatively non-toxic rocket propellant, seen by many as the successor to hydrazine. However, catalysts used to induce hydrogen peroxide decomposition are often plagued by issues such as low melting point and high pressure drop. 3D printing may offer solutions to these issues through the realisation of complex geometries such as triply periodic minimal surfaces (TPMS). This thesis concerns the development of one such catalyst based on the Schoen gyroid TPMS. Careful attention was paid to all aspects of the catalyst design, from the choice of the overall structure to the specific print technology, catalyst active phase, and support material.

Optimal gyroid configurations were identified through a series of initial characterisation experiments. The pressure drop was measured for various 3D-printed polymer structures at Reynolds numbers of 100-4000 and correlated with a friction factor. Numerical residence time simulations were performed on a subset of the same geometries, and existing heat transfer data for the gyroid was reinterpreted via a pressure drop analogy. The results suggest that strut-based gyroids are superior to sheet-based equivalents on account of their low friction factor, which enables a high geometric surface area for mass transfer. Rotation of the unit cells can also minimise the channelling phenomenon present in the default orientation.

Numerical modelling was used to analyse the influence of the catalyst geometry. The model is capable of predicting the spatial variations in temperature, pressure and species mass fractions occurring within a catalyst bed during hydrogen peroxide decomposition. Previous assumptions regarding mass transfer limitations and interphase temperature differences were also shown to be incorrect, lending the model improved predictive capabilities.

Manufacturing of the ceramic catalyst supports proceeded via fused deposition modelling and digital light processing. In a novel approach, the printed supports were directly coated with the catalyst, instead of applying an intermediate washcoat layer. This was enabled by partial sintering of the polymer-bound ceramics to obtain a porous microstructure. While the resulting BET surface area was low compared to commercial catalysts, there was no penalty on the decomposition rate for Pt at low concentration, due to the presence of intraparticle mass transfer limitations. At equivalent surface area and loading, indicative decomposition rates were also faster for Pt compared to MnOx. The 3D-printed Pt/Al2O3 and Pt/cordierite catalysts all survived decomposition testing without damage to their internal structure from thermal shock. Fitting of the model to the test data also revealed the true influence of the catalyst geometry on performance. Above a certain critical pressure drop, the gyroid structure has a lower pressure drop and higher decomposition capacity than any extruded catalyst with equivalent kinetics. Although spherical catalysts can have a higher overall capacity due to a lack of manufacturing constraints, their pressure drop is fixed, unlike the gyroid's. Hydrogen peroxide thrusters will therefore benefit from the adoption of novel catalyst geometries made possible by 3D printing. Future work should focus on improving the intrinsic kinetics and further optimising the catalyst geometry.
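To illustrate the friction-factor correlation idea: pressure drop through a porous lattice is commonly reduced to an f(Re) form and then applied via the Darcy-Weisbach relation. A minimal sketch with an invented power-law fit; the coefficients, geometry, and operating point are placeholders, not the thesis's correlation:

```python
def friction_factor(re, a=38.0, b=-0.8):
    """Invented power-law fit f = a * Re**b (placeholder coefficients)."""
    return a * re**b

def pressure_drop(re, length, d_h, rho, velocity):
    """Darcy-Weisbach form: dp = f * (L / D_h) * (rho * v^2 / 2)."""
    return friction_factor(re) * (length / d_h) * 0.5 * rho * velocity**2

# Illustrative operating point (SI units):
rho, mu = 1000.0, 8.9e-4        # water density (kg/m^3) and viscosity (Pa s)
d_h, v, L = 2e-3, 0.5, 0.05     # hydraulic diameter, velocity, bed length
re = rho * v * d_h / mu
print(f"Re = {re:0.0f}, dp = {pressure_drop(re, L, d_h, rho, v):0.0f} Pa")
```

Once f(Re) is fitted for a given lattice, comparing strut-based and sheet-based gyroids reduces to comparing their fitted coefficients at matched surface area, which is the comparison the abstract describes.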
Item Open Access
Development of the electrochemical CO2 reduction reaction on copper based electrodes (2024) Heenan, Alexander Reynell

The detrimental consequences of global warming and climate change continue to accumulate, manifesting through an increased frequency of extreme weather events, increased mean temperatures, and rising sea levels. Global warming can be predominantly attributed to the ever-growing concentration of greenhouse gases in our atmosphere, a result of the combustion of fossil fuels. To decrease the atmospheric greenhouse gas concentration, energy use must not only be switched to renewable sources such as wind, hydro, solar, and geothermal, but CO2 must also be captured and stored. The electrochemical CO2 reduction reaction (eCO2RR) has the potential to remove carbon from the atmosphere by utilising renewable energy, while producing value-added products for the chemical processing industry, such as carbon monoxide, formate, ethanol, acetate, and ethylene. While the eCO2RR has been studied with interest since the mid-1980s, the process tends to lack the efficiency, selectivity, activity, stability, or a combination of these (depending on the catalyst of interest) required for it to be commercially relevant.

The work presented in this thesis focuses on the eCO2RR using copper-based electrodes. Copper is unique as the only monometallic catalyst capable of reducing CO2 to products requiring more than two electrons, such as ethylene and ethanol, with reasonable faradaic efficiencies. Copper also has the benefit of being relatively active for the eCO2RR; however, it has poor stability and selectivity for a specific product, and tends to require high overpotentials, decreasing its energy efficiency. The literature review also uncovered some more specific areas of interest that are addressed in this thesis. Firstly, it highlighted significant discrepancies in the reported selectivity of the eCO2RR on copper electrodes. It was proposed that these discrepancies could be due to several factors, such as incorrect use (or no use) of iR compensation, variations in surface pretreatment of the copper, and variations in the metallurgical properties of the copper, such as grain size and grain orientation.

By examining 72 randomly selected eCO2RR publications, it was found that 63% used no form of iR compensation (or at least did not report using it). Only 22% of publications used full iR compensation, through either current-interrupt or positive-feedback (PF) correction coupled with post-experiment corrections. It was then shown that using only partial or no iR compensation can leave the actual electrode potential hundreds of millivolts from the desired electrode potential, resulting in significant current and selectivity variations. A novel piece of software was written here to allow 100% iR compensation while using 80% PF correction: it regularly checks the actual electrode potential and updates the applied potential until the actual electrode potential matches the desired potential once iR losses have been accounted for.

Additionally, it was found that the method of surface pretreatment of the polycrystalline copper surface also had an impact on activity and selectivity. While the electropolished copper had the lowest activity, it had significantly higher selectivity for CO compared with the mechanically roughened and polished electrodes, achieving up to 36.2% CO faradaic efficiency. The mechanically roughened sample achieved up to 15% faradaic efficiency for ethylene, while the polished and electropolished samples achieved less than 2% over a wide potential range. This was suspected to be due to variations in specific surface area, the distribution of exposed crystal facets, grain boundaries, defects, and undercoordinated sites.

The effect of increasing interfacial pH favouring C2+ formation is a commonly discussed phenomenon in the literature, particularly in H-cell configurations; however, the effect of interfacial pH in a flow cell has received little attention. It was found that by adjusting the electrolyte flowrate as well as the electrode length, the selectivity and activity of the eCO2RR could be optimised, and different selectivity regimes could be achieved. Maximising the electrode length increases the mean interfacial pH due to the formation of OH− ions as a by-product of the eCO2RR. Low electrolyte flowrates might be thought beneficial, as they increase the interfacial pH, but they also decrease the CO2 concentration at the electrode surface due to mass transfer limitations. It was found that the electrolyte flowrate should be optimised to provide sufficient CO2 transport to the electrode surface while allowing the interfacial pH to increase sufficiently to promote C2+ formation.

It is often thought that subsurface oxygen in copper electrodes can enhance C2+ selectivity; however, some attribute the enhanced performance to increased surface roughness, which can increase the number of grain boundaries, edge sites and defective sites. The effects of surface roughness and subsurface oxygen tend to be difficult to decouple, as inducing subsurface oxygen also causes surface roughness. To address this, ion implantation of neon and oxygen ions was used, the expectation being that oxygen implantation would both implant oxygen and create surface roughness, while neon implantation would only create surface roughness. Interestingly, XPS analysis showed that both the neon and oxygen ion implanted samples exhibited a decrease in the amount of oxides present and in the number of oxygen vacancies, which was correlated with higher surface roughness but worse electrochemical performance than the as-made electrode. This suggests that subsurface oxygen plays a far more crucial role in promoting eCO2RR performance than surface roughness effects.

Ionomers such as Nafion are commonly used as binders in catalyst layers to improve electrical connectivity and adherence to the support; however, they can also significantly affect the interfacial properties and the reaction environment of the catalyst. Here, it was found that by tuning the Nafion and catalyst loadings, a CuO nanoparticle catalyst could achieve selectivity similar to that of a gold electrode, with current efficiencies close to 80% for CO at a current density of 197 mA cm−2 in a gas diffusion electrode setup. Through in-situ and ex-situ XAS experiments, it was also found that the Nafion ionomer can inhibit the reduction kinetics of the CuO during electrolysis.
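A sketch of the correction logic the abstract describes, not the thesis's actual software: with 80% positive-feedback compensation active in the potentiostat, the residual 20% of the ohmic drop is removed numerically, and the applied setpoint is nudged until the true electrode potential reaches the target. The uncompensated resistance, mock cell response, and step rule are all invented:

```python
def true_potential(e_measured, current, r_u, pf_fraction=0.8):
    """Electrode potential after removing the uncompensated ohmic drop.
    Positive feedback already cancels pf_fraction of i*Ru, so only the
    residual (1 - pf_fraction) remains in the measured potential."""
    return e_measured - current * r_u * (1.0 - pf_fraction)

def adjust_setpoint(e_target, read_cell, r_u, tol=1e-3, max_iter=50):
    """Iteratively nudge the applied setpoint until the iR-corrected
    electrode potential equals the target (a mock control loop)."""
    e_applied = e_target
    for _ in range(max_iter):
        e_meas, i_cell = read_cell(e_applied)     # instrument readback
        err = e_target - true_potential(e_meas, i_cell, r_u)
        if abs(err) < tol:
            break
        e_applied += err                          # simple correction step
    return e_applied

# Mock cell: measured potential equals applied; current grows with potential.
mock = lambda e: (e, -0.02 * abs(e))              # (V, A), invented response
print(f"setpoint = {adjust_setpoint(-1.0, mock, r_u=5.0):0.4f} V")
```

The loop converges in a few iterations here; the point is that without the residual correction, the electrode would sit tens to hundreds of millivolts away from the intended potential, which is the discrepancy the survey of 72 publications highlights.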
Item Open Access
Computer simulation and controllability studies of multi-module ultrafiltration plants (1996) Winchester, James

The operation and control of whey protein concentrating ultrafiltration (UF) plants can, at times, be difficult. These problems can make plants less efficient through losses in product quality and throughput. With the general growth of the dairy industry and the subsequent increase in whey protein concentrate production, it is important to solve these operation and control problems. This thesis presents work investigating these problems, in three areas: first, the development of suitable models for the description of UF whey concentrating plants; second, the use of these models to study the operation and control of UF plants in order to solve the problems mentioned above; and third, the study of the UF plant process from a state-space perspective in order to determine the underlying reasons for the results found. The model developed for the simulation of UF plants was found to be qualitatively correct and applicable to the modelling of these plants. It showed that the control of UF plants is very closely linked with their operation, the main effect being the diafiltration ratios used in plant operation. The results found for the operation and control of UF plants were confirmed and explained using state-space analysis. The conclusions for the operation and control of UF plants are listed in section 5.5 of chapter 5, which gives an itemised list of all the conclusions of that chapter.
Item Open Access
Mathematical aspects of phylogenetic diversity measures (2024) Manson, Kerry

Phylogenetic diversity (PD) is a popular measure of biodiversity, with particular applications to conservation management. It brings a focus to the evolutionary relationships between species that is missing in simpler approaches, such as the use of species richness. PD attains this focus by considering species in terms of their positions on a phylogenetic tree. The mathematical properties of PD, and a suite of methods derived from it, have been studied since its introduction in the early 1990s. In this thesis, we explore these properties further, covering three main strands of PD-related study.

The first strand covers the study and comparison of the PD values of sets of a fixed size. We use combinatorial and algorithmic approaches to understand those sets of species that attain the extreme PD scores for sets of their size. A combinatorial characterisation of maximum PD sets is provided. This leads to a polynomial-time algorithm for calculating the number of maximum PD sets of each size by applying a generating function. We then use this characterisation to maximise a linear function on the leaves of a phylogenetic tree, subject to the solution being a maximum PD set. Additionally, dynamic programming is used to solve the dual problem of determining minimum PD sets of each size.

The second strand involves phylogenetic diversity indices, a type of function that partitions the PD of a set of species among its constituent members. We give a formal definition of this class of functions and investigate their properties. This process is aided by a description of diversity indices as points within a convex space, whose dimension and extremal points we describe. In particular, we show that rankings derived from these measures are susceptible to disruption by the extinction of some of the species being measured. We introduce a number of new measures that avoid this disruption to a greater extent than existing approaches.

The third strand deals with the link between PD and feature diversity (FD), another means of measuring biodiversity. We provide models for the evolution of features on phylogenetic trees that account for the loss of features, such as the loss of flight in some bird species. Doing so leads to results showing that PD is an imperfect proxy for FD unless feature loss is (unrealistically) ignored. We show how our new measure, EvoHeritage, spans a continuum connecting PD and species richness at the extremes, based on the assumed rate of feature loss.

The distinct parts of this thesis are linked by an aim to better understand what is meant by the concept of biodiversity, and to investigate how that understanding is reflected in the way we measure this idea. We provide a mathematical approach, complemented by a number of algorithms that enable these ideas to be put into practice.
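For readers unfamiliar with the measure: Faith's PD of a leaf set is the total branch length of the smallest subtree connecting those leaves (here taken to include the root). A minimal sketch on a toy tree stored as parent pointers with branch lengths; the tree and representation are invented for illustration:

```python
def pd(parent, blen, leaves):
    """Phylogenetic diversity: total length of the edges on paths from
    the given leaves up to the root, each edge counted once."""
    seen, total = set(), 0.0
    for v in leaves:
        while v in parent and v not in seen:   # stop at root or a visited edge
            seen.add(v)
            total += blen[v]                   # length of the edge above v
            v = parent[v]
    return total

# Toy tree: root R with internal node X; leaves a, b under X; leaf c under R.
parent = {"a": "X", "b": "X", "X": "R", "c": "R"}
blen   = {"a": 1.0, "b": 2.0, "X": 1.5, "c": 4.0}
print(pd(parent, blen, {"a", "b"}))   # 1.0 + 2.0 + 1.5 = 4.5
print(pd(parent, blen, {"a", "c"}))   # 1.0 + 1.5 + 4.0 = 6.5
```

Note that {a, c} scores higher than {a, b} despite both being pairs: the long branch to c contributes evolutionary distinctiveness, which is exactly what species richness ignores.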
Item Open Access
Enhancing approaches to glycaemic modelling and parameter identification. (2023) McHugh, Alexander Declan

Diabetes mellitus is a metabolic disease involving degradation of the body's endogenous mechanisms for regulating the concentration of glucose in blood plasma. In a healthy body, the pancreas produces insulin, a hormone which facilitates the uptake of glucose into cells for use as energy. Where the pancreas's insulin-producing cells are damaged (type 1 diabetes), or the body becomes severely resistant to insulin (type 2 diabetes), glucose plasma concentrations can reach harmful levels, a state labelled hyperglycaemia. Complications arising from diabetes and hyperglycaemia can cause organ failure, neuropathy, and, in severe cases, death. Diabetes treatment presents a significant health, social, and economic cost: in 2021, its total cost to New Zealand was NZ$2.1 billion, equivalent to 0.67% of gross domestic product for that year, a figure which is forecast to grow. Causes of diabetes include a variety of genetic and lifestyle factors, but its increasing prevalence is primarily due to increasingly energy-rich diets combined with more sedentary lifestyles in recent decades. Many treatment approaches and diagnostic tests have been developed to assess and manage diabetes and its common co-morbidities. Treatment of fully progressed diabetes often involves insulin therapy, where analogues of human insulin are externally administered to replace or supplement poor endogenous action. Diagnostic tests primarily aim to measure some metabolic aspect of an individual's glycaemic system and correlate it with the development of diabetes or its precursors. Specific diagnostic metrics of interest include insulin sensitivity, how much plasma glucose levels change per unit of insulin, and endogenous secretion, the rate at which the pancreas produces insulin.

Developments in computation over the last 50 years have enabled the creation of computer-simulated numerical models to represent human physiology. Glycaemic models such as the Intensive Control Insulin-Nutrition-Glucose (ICING) model use a compartment model to numerically simulate concentrations of glucose, insulin, and other quantities in various components of the overall glycaemic system. This model is well validated in many experimental, clinical, and in-silico analyses, where it has been used for glycaemic control, assessments of insulin sensitivity and kinetics, and other metabolic treatments. However, it still presents various opportunities for improvement, especially regarding simplified modelling assumptions and the identifiability of its parameters. This thesis explores various modelling and identifiability aspects of these models, and potential applications to novel phenomena in glycaemic control and diabetes care.

First, the existing ICING model is applied to two glucose tolerance test (GTT) trials, one with a small 1 U insulin modification and one with no insulin modification. The trials were compared primarily for the practical identifiability of insulin kinetic parameters representing hepatic clearance from plasma. Where a modelling approach is highly practically identifiable, clinical data provides unique, physiologically valid parameter values which produce simulations accurate to measurements. Where practical identifiability is poor, many combinations of parameter values, including non-physiological values, can provide an equivalently optimal fit to data, limiting the relevance and conclusions of the modelling approach. Identifiability analysis shows the trial with insulin modification yielded a domain of parameter values providing equivalently optimal fits that was 4.7 times smaller than the domain generated by the non-modified trial. This outcome shows insulin modification improves accurate assessment of subject-specific insulin kinetics and simulation, and suggests modification should be included in metabolic tests where accurate assessment of kinetics is a priority.

Mathematical analysis of the model equations, combined with practical numerical analysis, suggests parameter identifiability is improved where the time profiles of parameter coefficients are distinct from each other and from other model equation terms. To this end, potential modelling benefits are offered by expanding previously constant parameters into more complex profiles which may more completely represent their physiological action. A parameter representing hepatic clearance rate was restructured as a time-varying sum of mathematical basis splines to explore this approach, with parameter identification techniques identifying optimal weightings of the individual splines in the structure. To maintain physiological validity, this identification was constrained based on literature analysis of changes in hepatic clearance rate over time, and of the correlation between increasing hepatic clearance rate and decreasing glucose concentration. This more complex model achieves better insulin fits at the expense of greater computation and constraint requirements.

Improvements to the existing parameter structure and identification approaches enable novel model expansions. Specifically, a potential loss dynamic reported for subcutaneous jet-injection insulin delivery can be modelled and identified by an expanded model structure, where this loss is neither directly measurable nor quantified in a clinical setting. Thus, a new parameter is added to the existing model to represent the proportion of nominal insulin delivery lost at the injection site. Although this dynamic acts similarly to existing identified parameters, the enhanced time-varying structure allows robust per-trial identification of this novel parameter while maintaining or improving insulin simulation accuracy. This approach identified loss values of up to approximately 20% of a nominal 2 U dose in some patients. The identified loss proportion is consistent over a range of parameter identification protocols and values, demonstrating the robustness of the identification. Additionally, identified insulin sensitivity is shown not to vary significantly with identification of this loss factor, further validating the factor's inclusion in the model and identification approach. Where modelling considered this potential loss and identified a loss of 5% or above, insulin fit accuracy improved compared with the original case. Overall, this loss factor is shown to be present and quantifiable, and the model is shown to be expandable to novel loss dynamics without compromising fit accuracy or parameter identifiability.

The time-varying basis spline approach to modelling hepatic clearance is further justified and strengthened with an appropriate sensitivity analysis. While the previous analysis considered the general improvement gained from time-varying parameter structures, an optimal configuration of spline quantity, placement, and polynomial order is desired. For the analysed data, 20 equidistant splines of 2nd or 3rd order yielded equally optimal results and reduced the RMS error of insulin simulations by up to 29% relative to a constant-value parameter model. Importantly, these modelling improvements required no change to measurement or trial protocol, and affected identified insulin sensitivity by less than 0.05% at the optimum configuration. These results further validate the analysis of subcutaneous jet-injection loss, finding that a greater loss was identified with the more accurate time-varying clearance model, and that this loss was invariant to changes in configuration. These outcomes, while related to a single dataset and modelling case, demonstrate a methodology to justify and analyse model modifications.

Overall, these analyses present various areas of improvement and development for glycaemic modelling. More accurate model predictions and parameter identification yield more precise assessment of diagnostic parameters and individual diabetic pathogenesis. Furthermore, the model's expandability and its robustness in parameter identifiability allow novel treatments to be explored, analysed, and justified using numerical modelling techniques. While error, variability, and parameter trade-off will always pose challenges to modelling, the analyses and methodologies in this thesis ensure relevant and accurate modelling outcomes.
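A sketch of the time-varying parameter idea: represent the hepatic clearance profile as a weighted sum of B-spline basis functions and identify the weights by least squares against data. This uses scipy; the knot layout, spline order, and synthetic target profile are illustrative only, not the thesis's configuration:

```python
import numpy as np
from scipy.interpolate import BSpline

k = 3                                    # cubic splines
knots = np.linspace(0, 180, 8)           # minutes, equidistant knots
t_full = np.concatenate(([knots[0]] * k, knots, [knots[-1]] * k))  # clamped
n_basis = len(t_full) - k - 1

def basis_matrix(t):
    """Evaluate each B-spline basis function at times t (one per column)."""
    A = np.zeros((len(t), n_basis))
    for i in range(n_basis):
        c = np.zeros(n_basis)
        c[i] = 1.0
        A[:, i] = BSpline(t_full, c, k)(t)
    return A

# Synthetic "measured" clearance profile to fit (illustrative):
t = np.linspace(0, 180, 60)
target = 0.16 + 0.05 * np.sin(t / 40.0)
A = basis_matrix(t)
w, *_ = np.linalg.lstsq(A, target, rcond=None)   # identified spline weights
print("max fit error:", np.abs(A @ w - target).max())
# In the thesis's setting the weights are constrained to keep the profile
# physiologically valid; scipy.optimize.lsq_linear handles bound constraints.
```

The practical identifiability argument maps onto this directly: each basis function is nonzero only over a few knot spans, so its coefficient is informed by a distinct window of the data.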
Item Open Access
Utilizing augmented reality for attention guidance in outdoor cultural heritage storytelling. (2024) Ren, Ren

Cultural heritage institutions have a tradition of storytelling, often conducted through audio guides that require visitors to pay attention to details essential for understanding the cultural and historical significance of the heritage. However, guiding attention to these details using audio alone can be challenging. While various modalities such as physical, auditory, tactile and social have been explored, each presents limitations, particularly in the complex environment of outdoor cultural heritage sites. This study explores an alternative method, Augmented Reality (AR), to address these limitations. Through a user study conducted at the Kate Sheppard House involving 30 participants, and employing a within-subjects design, we compared two AR guidance techniques (Virtual Arrow and Green Laser) against a traditional Audio Only condition in terms of response time, knowledge retention and user preference. The results indicate that, although there was no statistical difference in the objective data, trends suggest a potential advantage of AR in enhancing visitors' attention. By integrating subjective feedback with observations, we offer implications for designers of outdoor cultural heritage sites and suggest directions for future research.

Item Open Access
Creep ratchetting of Centralloy® G 4852 Micro-R reformer tube alloy. (2024) Caughey, Mackenzie

Creep ratchetting is the accelerated damage experienced by a component under high-temperature (>0.4 Tm) creep conditions and fluctuating stresses. In the methanol production process, thick-walled (~15 mm) steam-methane reformers consist of hundreds of vertical tubes operating at temperatures up to 950 °C and internal pressures of 2000-3500 kPa. Operating under creep conditions, these tubes are often subjected to fluctuating stresses in the form of plant shutdowns, local variations in temperature, and through-wall temperature gradients. As such, premature failure of reformer tubes has been experienced and is of particular interest to Methanex Ltd. NZ.

In the current work, a brief analysis of the as-cast and aged microstructures of Centralloy® G 4852 Micro-R reformer tube alloy was completed. Scanning electron microscopy (SEM) and energy dispersive X-ray spectroscopy (EDS) techniques revealed as-cast microstructural phases consistent with those found in microalloyed reformer tube materials in industry and the literature. Analysis of a macro-etched tube cross-section revealed a composition of 100% columnar grains, indicating that the equiaxed region had been removed during manufacturing or machining. Analysis of samples aged at 1050 °C for 1000 and 5000 hours revealed coarsening of the primary carbide network, along with extensive precipitation and coarsening of smaller cube- or needle-shaped secondary carbides within the austenite matrix. No clear difference in the structure, density, or distribution of precipitates was observed between the 1000-hour and 5000-hour aged samples.

As-cast specimens of Centralloy® G 4852 Micro-R were subjected to creep and creep ratchetting (C-CR) testing at 975 °C, stresses of 30, 36, and 42 MPa, and a ratchetting ratio (dwell to hold) of 1:6. At the 30 and 36 MPa stress levels, creep ratchetting tests were completed at various dwell times (20, 30, 40 minutes) with a secondary stress of 6 MPa. The 42 MPa creep ratchetting tests were completed with a dwell time of 30 minutes and varying secondary stresses (2, 4, 6 MPa). These two testing regimes were used to identify the relative effect of elevated stresses during a dwell period on the performance of Centralloy® G 4852 Micro-R. Post-processing and interpretation of this data has been completed. Further analysis of the collected data was completed using Robinson's rule, Larson-Miller data, strain rate equations, the Omega method, the Gurson-Tvergaard-Needleman (GTN) model, and a proposed non-linear algebraic (NLA) equation. Finally, an assessment of the effectiveness of each technique for modelling creep ratchetting behaviour is provided.
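One of the listed analysis tools, the Larson-Miller parameter, collapses creep-rupture data across temperatures onto a single stress-life curve; a quick sketch. The constant C = 20 is the conventional default and the example lives are invented, not the thesis's data:

```python
import math

def larson_miller(temp_c, time_h, c=20.0):
    """Larson-Miller parameter: LMP = T * (C + log10(t_rupture)),
    with T in kelvin and rupture life t in hours. C = 20 is the
    commonly used default constant."""
    return (temp_c + 273.15) * (c + math.log10(time_h))

# Equivalent-damage reading: equal LMP implies comparable creep damage.
lmp = larson_miller(975.0, 10_000)        # service condition, invented life
print(f"LMP = {lmp:0.0f}")
# Solve for the accelerated-test duration at 1050 degC with the same LMP:
t_test = 10 ** (lmp / (1050.0 + 273.15) - 20.0)
print(f"equivalent life at 1050 degC ~ {t_test:0.0f} h")
```

This temperature-time trade-off is what lets aged samples (1000 and 5000 hours at 1050 °C) stand in for much longer service exposures at lower temperatures.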
Item Open Access Creep ratchetting of Centralloy® G 4852 Micro-R reformer tube alloy (2024) Caughey, Mackenzie
Creep ratchetting is the accelerated damage experienced by a component under high-temperature (>0.4 Tm) creep conditions and fluctuating stresses. In the methanol production process, thick-walled (~15 mm) steam-methane reformers consist of hundreds of vertical tubes operating at temperatures up to 950 °C and internal pressures of 2000-3500 kPa. Operating in creep conditions, these tubes are often subjected to fluctuating stresses in the form of plant shutdowns, local variations in temperature, and through-wall temperature gradients. As such, premature failure of reformer tubes has been experienced and is of particular interest to Methanex Ltd. NZ.

In the current work, a brief analysis of the as-cast and aged microstructures of Centralloy® G 4852 Micro-R reformer tube alloy was completed. Scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDS) techniques revealed as-cast microstructural phases consistent with those found in microalloyed reformer tube materials in industry and the literature. Analysis of a macro-etched tube cross-section revealed a fully columnar grain structure, indicating that the equiaxed region had been removed during manufacturing or machining. Analysis of samples aged at 1050 °C for 1000 and 5000 hours revealed coarsening of the primary carbide network, along with extensive precipitation and coarsening of smaller cube- or needle-shaped secondary carbides within the austenite matrix. No clear difference in the structure, density, or distribution of precipitates was observed between the 1000-hour and 5000-hour aged samples.

As-cast specimens of Centralloy® G 4852 Micro-R were subjected to creep and creep ratchetting (C-CR) testing at 975 °C, stresses of 30, 36, and 42 MPa, and a ratchetting ratio (dwell to hold) of 1:6. At the 30 and 36 MPa stress levels, creep ratchetting tests were completed at various dwell times (20, 30, 40 minutes) with a secondary stress of 6 MPa. The 42 MPa creep ratchetting tests were completed with a dwell time of 30 minutes and varying secondary stresses (2, 4, 6 MPa). These two testing regimes were used to identify the relative effect of elevated stresses during a dwell period on the performance of Centralloy® G 4852 Micro-R. The resulting data have been post-processed and interpreted. Further analysis was completed using Robinson's rule, Larson-Miller data, strain rate equations, the Omega method, the Gurson-Tvergaard-Needleman (GTN) model, and a proposed non-linear algebraic (NLA) equation. Finally, an assessment of the effectiveness of each technique in modelling creep ratchetting behaviour is provided.
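Among the assessment techniques listed above, the Larson-Miller parameter is a standard way to trade temperature against time to rupture. The sketch below implements the conventional form LMP = T(C + log10 tr), with T in kelvin and tr in hours; the material constant C = 20 and the example conditions are generic assumptions, not values identified in the thesis.

```python
# Minimal sketch: the conventional Larson-Miller parameter,
# LMP = T * (C + log10(t_r)), with T in kelvin and t_r (time to rupture)
# in hours. C = 20 is a common generic value for steels, assumed here.
import math

def larson_miller(temp_c: float, rupture_hours: float, c: float = 20.0) -> float:
    """Larson-Miller parameter for a test temperature given in Celsius."""
    return (temp_c + 273.15) * (c + math.log10(rupture_hours))

def rupture_life_hours(temp_c: float, lmp: float, c: float = 20.0) -> float:
    """Invert the relation: estimated time to rupture at another temperature."""
    return 10.0 ** (lmp / (temp_c + 273.15) - c)

# Illustrative use at the 975 degC test temperature with a hypothetical
# 10,000 h rupture life, then re-expressed at 950 degC for the same LMP.
lmp = larson_miller(975.0, 10_000.0)
print(f"LMP = {lmp:.0f}")
print(f"Equivalent life at 950 degC: {rupture_life_hours(950.0, lmp):,.0f} h")
```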
Item Open Access Energy transition of dairy agriculture: scenario analysis and system concept engineering - with case study in Canterbury, New Zealand (2024) Murphy, Samuel James
Agriculture accounts for 49.2% of gross greenhouse gas (GHG) emissions in New Zealand (NZ). Methane from enteric fermentation makes up 37% of New Zealand's gross emissions, with dairy cattle contributing 22.7% [1]. In the last three decades, there have been significant increases in both dairy cow numbers and synthetic nitrogen fertiliser use in the Canterbury region: in 1992 there were 50,000 dairy cows in Canterbury and 3.2 kt of nitrogen fertiliser was used on dairy farms; by 2019 there were 1.2 million dairy cows (including milking and dry cows) and 57 kt of nitrogen fertiliser was used. The Canterbury dairy agriculture system must urgently reduce its GHG emissions in line with the 1.5 °C failure limit for global warming while staying within environmental limits and continuing to provide nutrition for a growing global population.

This thesis explores opportunities to decarbonise dairy agriculture in Canterbury. It couples transition engineering methods with farm system modelling to evaluate the performance of theoretical farm systems with a variety of GHG mitigation strategies in place. Defining the essential activity of the Canterbury agriculture sector as providing nutrition, rather than necessarily producing milk solids, allows scenarios to be explored in which dairy production is downshifted and nutrients are provided by alternative food crops. The thesis begins with a review of literature on greenhouse gas mitigation strategies in global agriculture and New Zealand dairy agriculture, farm system modelling, regulation of NZ agriculture, and the environmental harms associated with dairy farming. Chapter 3 describes how transition engineering methods were used to aid problem definition and the development of research objectives. Chapter 4 characterises the present-day Canterbury dairy agriculture system in terms of production, financial performance and environmental performance. Chapter 5 presents a brief history of dairy agriculture in Canterbury. Chapter 6 describes scenario selection and farm system modelling results. Chapter 7 explores opportunities to transition to a low-emissions Canterbury agriculture sector. Finally, Chapter 8 gives conclusions and recommendations for further research.

The results suggest that no technology solution achieves the deep emissions reduction required without decreasing dairy production. There will need to be a significant reduction of the dairy herd in Canterbury, coupled with increased production of plant-based protein sources. Synthetic nitrogen fertiliser use must also be significantly reduced, along with an increase in organic farming and other practices that improve soil health, biodiversity and ecological outcomes. Increasing production of plant-based crops significantly increases food production per hectare and significantly decreases GHG emissions, both per hectare of farm area and per unit of food produced, but with a significant decrease in profitability. The most profitable wheat scenario was organic wheat with electrified farm vehicles and transport. With a carbon price of $165.40, the Organic and Electric Wheat scenario becomes as profitable as the business-as-usual (BAU) scenario with a high proportion of palm kernel extract purchased as supplementary feed. An organic mixed farm system where cow numbers are reduced by 75% decreases emissions by 74%, with only a 25% reduction in profitability compared with the BAU scenario.
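The breakeven carbon price comparison above can be made concrete with a small calculation: the price at which a lower-emitting but less profitable scenario matches BAU net profit is the profit gap divided by the emissions gap. The sketch below uses hypothetical placeholder figures, not the thesis's modelled values.

```python
# Minimal sketch: breakeven carbon price between two farm scenarios.
# Net profit under a carbon price p is profit - p * emissions, so setting the
# two scenarios' net profits equal gives p = profit gap / emissions gap.
# All figures below are hypothetical placeholders.

def breakeven_carbon_price(profit_bau: float, profit_alt: float,
                           emissions_bau: float, emissions_alt: float) -> float:
    """Carbon price ($ per t CO2e) equalising net profit of two scenarios."""
    return (profit_bau - profit_alt) / (emissions_bau - emissions_alt)

price = breakeven_carbon_price(
    profit_bau=4000.0,     # $/ha/yr, business-as-usual dairy
    profit_alt=2500.0,     # $/ha/yr, lower-emissions cropping scenario
    emissions_bau=10.0,    # t CO2e/ha/yr
    emissions_alt=1.0,     # t CO2e/ha/yr
)
print(f"Breakeven carbon price: ${price:.2f} per t CO2e")  # ~$166.67 here
```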
Item Open Access A study on audience experience of diegetic portals as a scene transition device in cinematic virtual reality (2024) Adams, Eleanor
This thesis explores the use of portals as a scene transition method in cinematic virtual reality (CVR) and outlines some key characteristics of a portal in this context. A 15-minute CVR prototype experience featuring portal transitions, triggered through interactions with a portkey object, was developed in the Unity game engine. A user study was conducted with this prototype with the aim of qualitatively uncovering dominant subjective perspectives on the portal transition in the CVR context. Two dominant social perspectives, termed Voyagers and Observers, emerged from this sample. The two perspectives are distinct collective viewpoints, and each participant was sorted into one of these groups depending on the answers they gave in the user study. Opinions differed between the two groups on some points and aligned on others. The interpreted results provide a flexible best-practice guideline, with four design principles for portal transitions in CVR narratives. These may be utilised by future directors and content producers of CVR experiences, and they also provide insight for researchers in the fields of interactive digital narratives and cinematic virtual reality.

Item Open Access Comparison of macroscopic and microscopic modelling for evaluating bus service reliability (2023) Khorasani, Gholamreza
Public transportation plays a crucial role in people's lives. High-quality public transportation is essential for individual well-being and promotes economic growth and productivity. Importantly, a reliable and efficient bus service can encourage people to opt for public transport over private vehicles, aiding transport authorities in their efforts to reduce road congestion. Bus service reliability is a pivotal factor in determining service quality for passengers. Given bus services' inherent complexity and stochastic behaviour, understanding the factors contributing to service unreliability is of utmost importance for transport authorities.

This study adopts a before-and-after approach, focusing on a specific corridor in Christchurch known as Riccarton Road. It reviews the literature on several types of bus performance indicators and employs the coefficient of variation (CoV) to assess the level of improvement in bus service components along this corridor before and after the implementation of bus priority changes. Additionally, the study investigates whether correlations exist between dwell times at successive bus stops, between link times on successive links, and between dwell times and link times. Modelling is invaluable to transport authorities in this context: accurate models can immensely benefit the planning and monitoring of bus services, allowing authorities to identify bottlenecks, assess the impact of interventions such as bus priority measures, and allocate resources more efficiently. By employing both macro-simulation and micro-simulation models, this study aims to equip transport authorities with the tools necessary for maintaining and improving service reliability.

Various sources of variation have been identified as contributory factors in bus service performance. Historically, macro-simulation models in existing studies have often ignored link time and dwell time distributions, and no study has considered the correlation between link times and dwell times when assessing bus service reliability. This study leverages data from before and after the implementation of bus priority measures to investigate these overlooked aspects. It explores the correlation between various components of bus travel time and how accounting for these correlations can enhance the accuracy of macro-simulation models. The micro-simulation model developed in this study goes further by considering the interaction between buses and other vehicles at bus stops, which helps in comparing the reliability of bus services with and without bus priority measures at stops.

The study concludes by offering a comprehensive comparison between micro-simulation and macro-simulation models. This comparison covers various aspects, from the accuracy of the models to the level of effort required to develop them, the skills needed, the level of detail each model offers, and the amount of data each requires. It also delves into their respective applications in the planning and monitoring of bus services, providing a holistic view for transport authorities to make informed decisions.
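As a concrete illustration of the CoV metric and the dwell-link correlation examined above, the sketch below computes both from simulated observations; the data model and values are hypothetical, not the Riccarton Road dataset.

```python
# Minimal sketch: coefficient of variation (CoV = std / mean) for bus travel
# time components, and the dwell-link correlation discussed above. The arrays
# are simulated placeholders, not the study's observations.
import numpy as np

def cov(samples) -> float:
    """Coefficient of variation: sample standard deviation over the mean."""
    samples = np.asarray(samples, dtype=float)
    return samples.std(ddof=1) / samples.mean()

rng = np.random.default_rng(1)
dwell_times = rng.gamma(shape=4.0, scale=5.0, size=200)           # s at a stop
link_times = 60.0 + 0.8 * dwell_times + rng.normal(0, 8, 200)     # s on next link

print(f"CoV of dwell times: {cov(dwell_times):.2f}")
print(f"CoV of link times:  {cov(link_times):.2f}")

# Pearson correlation between dwell times and downstream link times; a
# macro-simulation that samples the two independently would miss this
# dependence and misstate overall travel time variability.
r = np.corrcoef(dwell_times, link_times)[0, 1]
print(f"Dwell-link correlation: r = {r:.2f}")
```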