

University of Saskatchewan's Repository for Research, Scholarship, and Artistic Work

Welcome to HARVEST, the repository for research, scholarship, and artistic work created by the University of Saskatchewan community. Browse our collections below or find out more and submit your work.


Recent Submissions

(2024-04-19) Baron, Kaitlyn; Sumner, David; Bergstrom, Donald; Mazurek, Kerry
A surface-mounted flat plate is an important fundamental shape in fluid dynamics, due to the interaction of the plate with the boundary layer on the surface (or ground plane) and the vortex structures produced in the wake. The plate acts as a bluff body when oriented normal to the flow; when angled with respect to the flow, a low-aspect-ratio flat plate may be used as a vortex generator. In this thesis, the flow over a surface-mounted flat plate is examined experimentally in a low-speed wind tunnel using a seven-hole pressure probe and particle image velocimetry, with the aim of better understanding the behaviour of the flow in the near-wake and the transition between the plate acting as a bluff body and as a streamlined body. Comparisons were also made to surface-mounted prisms to investigate the effect of the afterbody on the flow.

The wake region for four low-aspect-ratio rectangular flat plates oriented normal to the flow, with aspect (height-to-width) ratios of AR = 0.2, 0.35, 0.5, and 1, was investigated using a seven-hole pressure probe and particle image velocimetry. Additionally, the far-wake region for AR = 0.2 was investigated using a seven-hole pressure probe for incidence angles of β = 10° to 90° (from angled to normal to the flow). All tests were conducted with a boundary-layer-thickness-to-plate-width ratio of δ/D = 0.6 and a Reynolds number of ReD = 7.5×10^4. For AR = 0.2 to 0.5, there were two streamwise vortex pairs in the wake, with varying patterns of upwash and downwash. Additionally, in the vertical symmetry plane, these three aspect ratios had a saddle point near the downstream edge of the recirculation zone; this point defines the flow field and leads to regions of upward- and downward-directed flow at the edge of the mean recirculation zone. In contrast, for AR = 1, there was only downwash on the wake centerline and a single pair of streamwise vorticity concentrations, and the saddle point was absent in the vertical symmetry plane.
As β varies for AR = 0.2, three distinct flow regimes can be identified. The plate acts as a streamlined body with a dominant tip vortex in regime I (β = 10° to 45°) and as a bluff body with two counter-rotating vortex pairs in regime III (β = 80° to 90°). Regime II (β = 60° to 75°) is the transition regime, in which the tip vortex expands and deforms into the two vortex pairs. The flow around a square prism is similar to that around a flat plate, with many of the same critical points and vortex structures present. However, for the same AR, a prism's recirculation zone is shorter than a flat plate's due to the downward momentum provided by the free-end vortex atop the prism.
Through the Fire and Flames: Addressing Challenges in Wildfire Debris Analysis
(2024-04-19) Boegelsack, Nadin; McMartin, Dena W; O'Sullivan, Gwen; Hawkes, Christopher; Withey, Jonathan M; Abdelrasoul, Amira; McPhedran, Kerry; Forbes, Shari
Wildfires are a global concern, with increasing occurrence and area burned. The associated costs apply not only to individual fires, such as the estimated 10 billion US dollars for the 2016 Fort McMurray fire, but also to fire prevention efforts, as over 50% of wildfires are caused by human activity. Arson, the deliberate or reckless starting of a fire resulting in ecosystem and infrastructure damage or loss of life, is a major crime. Forensic analysis of fire debris plays a key role in determining the cause and origin of a fire, requiring meticulous analysis of evidence to determine whether the fire was started intentionally. One vital aspect of these investigations involves the detection and identification of Ignitable Liquid Residue (ILR), the remnants of accelerants. Arson cases have one of the lowest conviction rates among major crimes, with a large percentage of unsolved cases, because trace levels of ILR in fire debris samples are frequently obscured by abundant matrix compounds with similar physico-chemical attributes. Three main challenges were identified in conventional ILR analysis in the context of wildfires: interference coupled with matrix composition and the relative signal abundance of ILR target compounds; classification according to the American Society for Testing and Materials (ASTM) standard; and the potential for cross-contamination stemming from long transport and storage times. The application of design of experiment (DoE) principles to method development of flow-modulated comprehensive multidimensional gas chromatography coupled with time-of-flight mass spectrometry (GC×GC-ToF MS) for ILR analysis, addressing the interference-related issues, is discussed. Both hardware (column selection and modulator settings) and GC run parameters (oven programming, inlet pressure, and modulation period) were optimized. For hardware, carbon loading potential, dilution effect, target peak amplitude, and skewing effect were evaluated.
GC run parameter optimization compared DoE approaches (Box-Behnken and Doehlert designs) to assess sensitivity, selectivity, peak capacity, and wraparound, alongside evaluation of target peak retention, resolution, and shape. Sample alignment is addressed via the development of retention time indices (RI) to facilitate ASTM classification, with resulting advantages for sample comparison across arson cases and laboratories. Combining two well-established GC RI systems, the non-isothermal Kovats index and the Lee index, performance was verified for ILR classification using certified standards and simulated wildfire debris, prior to performance validation on wildfire scene samples. Lastly, the thesis explores the importance of sample integrity by employing targeted and untargeted chemometric analysis to detect and characterize cross-contamination for various sample matrices in a controlled environment. It investigates the potential for associated false positives, with an outlook on quantitative analysis and source apportionment for future development, before analysing the impacts of different packaging and storage methods. Both DoE models operated in the optimal zone after hardware optimization. The final method developed from this research successfully separated all target compounds from varied matrix compositions, without wraparound for compounds with at least four aromatic rings. During validation, remaining co-elutions could be resolved with a deconvolution algorithm. For alignment, the developed RI system showed very good correlation with predicted values (r² = 0.97 in the first dimension, r² = 0.99 in the second dimension) and was valid for a wide range of analyte concentrations and operational settings (coefficient of variation (CV) < 1% in the first dimension, < 10% in the second dimension). It resulted in 86% coverage of the total chromatogram area, leading to the successful development of an ILR contour map.
The cross-contamination investigation showed a notable potential for false positive identification, with gasoline compound transmission detectable after a 1-hour exposure and a full profile transfer after an 8-hour exposure. Matrix interaction effects were observed in the form of inherent native compound interference as well as adsorbate-adsorbate interaction during transmission and extraction. Chemometric analysis allowed for distinction between negative, positive, and contaminated samples, with classification confidences of 88% for targeted analysis, 93% for untargeted analysis, and 95% for diagnostic ratio analysis with three ratios deployed in tandem. Employing packaging reduced the extent of cross-contamination by varying degrees. Nylon-based packaging performed better than polyethylene-based packaging, as the latter material itself emitted interfering compounds. Heat-sealing was the most reliable sealing mechanism, and refrigerated storage offered several advantages. While double-packaging is a recommended practice, triple-packaging did not show significant benefits. This thesis successfully developed several tools for ILR analysis, including a GC×GC-ToF MS method which fulfills and exceeds the current ASTM requirements, an ILR classification contour map utilising the roof-tiling effect of the developed method, and chemometric tools to evaluate cross-contamination. The implications for environmental and civil engineering practice are numerous, particularly regarding the paramount safety of the public and the environment in the conduct of engineering projects, especially those in regions increasingly vulnerable to wildfire and arson.
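The non-isothermal Kovats index used above for alignment is conventionally computed with the van den Dool and Kratz formula, which interpolates linearly between the retention times of the n-alkanes bracketing the analyte. A minimal sketch (the retention times below are made-up illustrative values, not data from the thesis):

```python
def linear_ri(t_x, alkane_times):
    """Non-isothermal (van den Dool & Kratz) retention index.
    alkane_times maps carbon number -> retention time for reference
    n-alkanes; t_x is the analyte's retention time, which must be
    bracketed by two consecutive alkanes."""
    carbons = sorted(alkane_times)
    for n, n1 in zip(carbons, carbons[1:]):
        t_n, t_n1 = alkane_times[n], alkane_times[n1]
        if t_n <= t_x <= t_n1:
            # Linear interpolation between bracketing alkanes
            return 100 * n + 100 * (t_x - t_n) / (t_n1 - t_n)
    raise ValueError("t_x not bracketed by reference alkanes")

# Hypothetical retention times (minutes) for n-C8..n-C10 and an analyte:
alkanes = {8: 6.0, 9: 8.0, 10: 10.5}
ri = linear_ri(7.5, alkanes)   # elutes between C8 and C9
print(round(ri, 1))            # 875.0 = 800 + 100 * (7.5 - 6.0) / (8.0 - 6.0)
```

Computing such indices for both GC×GC dimensions is what allows chromatograms recorded on different instruments, or in different laboratories, to be compared on a common scale.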
(2024-04-19) Asamoah, Isaac Dante; Srinivasan, Raj; Soteros, Chris; Lui, Juxin; Khan, Shahedul
In this thesis, we review and explore stochastic models of epidemics developed by researchers in recent years. These stochastic models encompass both discrete- and continuous-time Markov chain models, with particular emphasis on stochastic Susceptible-Infected-Susceptible (SIS) and Susceptible-Infected-Recovered (SIR) models. These models are compared with their deterministic counterparts regarding dynamics, behavior, and outcomes, assuming a constant population size. The comparison involves quantitative and qualitative analyses, focusing on the asymptotic dynamics, the mean of the stochastic process versus the deterministic solution, and differing properties such as the final size of an epidemic, particularly when the basic reproduction number, R0, exceeds 1. Significant observations include the bimodal nature of the probability distributions in stochastic models when R0 exceeds 1: the two modes correspond, respectively, to disease elimination and disease persistence. This highlights the qualitative differences in asymptotic dynamics between deterministic and stochastic models. The occurrence of disease elimination in the stochastic SIS models as time approaches infinity stands in contrast to the deterministic SIS model; similarly, the potential for disease elimination during the peak period in the stochastic SIR model contrasts with the deterministic SIR model. The thesis also investigates the impact of factors such as the initial number of infectives, the basic reproduction number, and the population size on an epidemic's duration and final size. For the models studied, larger populations lead to longer epidemic durations, and the final size of the epidemic increases with the initial number of infectives. Further, the thesis conducts a comparative analysis between stochastic and deterministic SIR models, specifically for a model of COVID-19.
A vaccination parameter, presumed to be 100% effective, is introduced to evaluate its impact on the expected time until disease extinction. Findings reveal that vaccination significantly accelerates the eradication of epidemics. Overall, this study highlights the crucial role of stochastic models in capturing uncertainties and variations that real-world epidemics may have, arising from factors like the unpredictable nature of interpersonal contact.
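The bimodality described above, where a disease either dies out early or settles near the deterministic endemic level, can be reproduced with a small simulation. The sketch below uses the standard Gillespie (stochastic simulation) algorithm for a continuous-time Markov chain SIS model; the parameter values are illustrative assumptions, not taken from the thesis:

```python
import random

def sis_gillespie(N, I0, beta, gamma, t_max, rng):
    """One realization of the continuous-time Markov chain SIS model.
    Events: infection at rate beta*S*I/N, recovery at rate gamma*I.
    Returns the number of infectives at t_max (0 means extinction)."""
    t, I = 0.0, I0
    while t < t_max and I > 0:
        S = N - I
        rate_inf = beta * S * I / N
        rate_rec = gamma * I
        total = rate_inf + rate_rec
        t += rng.expovariate(total)          # time to next event
        if rng.random() < rate_inf / total:  # which event occurred?
            I += 1
        else:
            I -= 1
    return I

rng = random.Random(42)
N, I0, beta, gamma = 100, 2, 2.0, 1.0        # R0 = beta/gamma = 2 > 1
runs = [sis_gillespie(N, I0, beta, gamma, 20.0, rng) for _ in range(300)]
extinct = sum(1 for I in runs if I == 0) / len(runs)
persisting = [I for I in runs if I > 0]
print(f"extinction fraction: {extinct:.2f}")
```

With R0 = beta/gamma = 2 and two initial infectives, branching-process theory predicts an extinction probability near (1/R0)^2 = 0.25, while the surviving runs fluctuate around the deterministic endemic equilibrium I* = N(1 - 1/R0) = 50: the two modes of the bimodal distribution.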
Guide to Common Parasites of Food Fish Species in the Northwest Territories and Nunavut
(Global Water Futures Northern Water Futures, 2024-03) Zabel, N; Swanson, Heidi; Conboy, G
Prepared by N. Zabel & Dr. H. Swanson, Wilfrid Laurier University, and reviewed by Dr. G. Conboy (DVM, PhD, DACVM), Atlantic Veterinary College, University of Prince Edward Island. Preparation of this guide was supported by Northern Water Futures (Global Water Futures; Canada First Research Excellence Fund). Reviews, photographs, and expert guidance were received as in-kind support from several individuals, and we gratefully acknowledge these important contributions. Funding for printing of guides distributed within the Northwest Territories was provided by the Government of the Northwest Territories.
Drivers and Persuasive Strategies to Influence User Intention to Learn About Manipulative Design
(2024-04-18) Babaei, Pooria; Vassileva, Julita; McCalla, Gord; Stakhanova, Natalia
The proliferation of e-commerce, game, and social networking sites has brought to light the use of "dark patterns" or, more generally, manipulative designs (MDs), which exploit psychological effects and cognitive biases of users to channel their behaviour toward outcomes that benefit the company or owner of the site, against the users' best interests. Previous research has categorized MDs, assessed their impact on users, gauged their prevalence, and attempted automated detection using computer vision and natural language processing techniques. However, limited attention has been given to understanding how to warn and educate users about MDs, guiding them to recognize and resist such manipulative tactics. To address this gap, we carried out a controlled study with n = 135 participants, using a survey based on the Protection Motivation Theory (PMT) to better understand people's motivations to learn about MDs. We also explored the effectiveness of two persuasive strategies, based on Cialdini's principles of influence (social influence and authority), in triggering attention towards MDs and the intention to learn more about MDs and to avoid them. For this, we created a simulated application in a mobile app distribution platform modeled on the Google Play Store, containing a visual signal, a warning based on one of the two persuasive strategies, and simulated reviews from other users. The results indicate that two of the five PMT constructs, a higher Perceived Severity of MDs and a lower Perceived Response Cost of learning about MDs, have the most significant influence on the intention to learn more about MDs. The participants in the experimental group, exposed to the two persuasive strategies, exhibited a larger increase in their intention to seek information about MDs than the participants in the control group.
Our study showcases the potential of a persuasive intervention, illustrating how mobile app distribution platforms can enhance user protection against MD exploitation. By implementing such interventions, these platforms can boost the accountability and transparency of the applications they host and raise MD awareness among their users.
(2003-12) Faruquee, Rupam Reza; Kalagnanam, Suresh; Dobni, Brooke; Becker, Paul
Traditionally, facilities management (FM) performance has not been analyzed extensively by FM professionals, especially in educational institutions. Performance management has mostly focused on quantitative financial and operational measures. However, given the increased emphasis on customer orientation and the optimum utilization of scarce resources, it has become necessary to have a balanced view of FM performance. This study examined the performance indicators for FM operations in universities from a Balanced Scorecard (BSC) perspective. It collected data on the key performance indicators tracked by FM departments in North American (Canada and United States) universities. It also sought out organizational issues that hindered benchmarking against other organizations. A survey was administered via four modes (mail, fax, e-mail, and on-line completion) to FM directors in 200 North American universities, seeking information on the perceived importance of various performance indicators and on their measurement and usage. A response rate of eighteen percent was attained. The results indicated that although benchmarking and performance management are slowly gaining acceptance in universities, there was an imbalance in the use of key performance indicators by FM professionals. Most of the indicators perceived as important, measured, and/or used were lag measures. The use of financial and operational indicators was also predominant. The study revealed that the association between the measurement of performance indicators and their perceived importance was not always positive, as anticipated. With respect to the financial and customer perspectives there was a statistically significant negative correlation, whereas for the other two perspectives the relationship was positive. The study also found no significant relationship between the strategic development of FM departments and their drive towards continuous quality improvement (CQI) initiatives.
The relationship between strategy and the measurement and use of performance indicators was also unsubstantiated. The results further indicated that the influence of the respondent universities' backgrounds on their FM performance could not be properly explained; in future studies, case analysis may be performed to examine the effect of this variable on FM performance. Furthermore, the BSC assertion that strategy-driven organizations would have performance indicators in place to track progress towards their goals needs to be revisited as well.
H.263 Based Facial Image Compression For Low Bit Rate Communications
(1998-01) Ding, Li; Takaya, Kunio
Video compression techniques have been widely used in videophone and videoconferencing applications to solve video bandwidth problems. H.263 is an international video compression standard that specifies the format and algorithm for a video codec to send or receive video over low-bandwidth network connections. A software-based implementation of H.263 is much less expensive than a hardware-based implementation, but its encoding speed is very slow, which greatly affects the performance of visual communications. This thesis presents a modified H.263 video compression method which improves the encoding speed. Since motion vector search, the DCT transform, and bit rate control are at the heart of H.263 video encoding, the method presented in this thesis focuses on each of these encoding components to improve computational efficiency. After applying the proposed methods of face tracking, fast motion estimation, fast DCT transform, and fast bit rate control, the new implementation of H.263 improved the encoding speed by about 7 times compared with the original implementation. This thesis also presents an implementation of a videophone system based on the modified H.263 for visual communication. This system has demonstrated that, by employing the methods proposed in this thesis, the videophone is now an affordable reality.
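The DCT stage mentioned above operates on 8×8 pixel blocks in H.263, and the 2-D transform is separable: a 1-D DCT applied to rows, then to columns. That separability is the property fast DCT algorithms exploit. A minimal sketch of the plain (non-fast) reference transform, for illustration only:

```python
import math

def dct_1d(x):
    """Orthonormal 1-D DCT-II, applied to rows/columns of an 8x8 block."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        c = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(c * s)
    return out

def dct_2d(block):
    """2-D DCT computed separably: transform rows first, then columns."""
    rows = [dct_1d(r) for r in block]
    cols = list(zip(*rows))                       # transpose
    transformed = [dct_1d(list(c)) for c in cols]
    return [list(r) for r in zip(*transformed)]   # transpose back

# A flat 8x8 block concentrates all energy in the DC coefficient,
# which is why smooth image regions compress so well:
flat = [[128] * 8 for _ in range(8)]
coeffs = dct_2d(flat)
print(round(coeffs[0][0]))  # DC term = 8 * 128 = 1024; all others ~0
```

A fast DCT, such as the one developed in the thesis, computes the same coefficients with far fewer multiplications by factoring the cosine sums, but the separable structure shown here is the starting point.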
(1995) Dhakal, Pramod; Sachdev, M. S.
A power system network requires frequent changes in its configuration during normal operation. These changes are made by operating circuit breakers and isolators located in substations. In the past, switches were opened and closed by operators using predefined guidelines. In modern substations this task is performed by digital computers, either exclusively or by assisting the operators in making proper decisions. The quality of power supplied to customers is enhanced if the computers automatically make appropriate switching decisions without significant delay. This thesis describes the design of a digital computer based substation switching scheme which employs generalized rules for interlocking and sequence switching. In addition to interlocking constraints, generation and load balance, ratings of equipment, and continuity of power supply are taken into consideration. When started for the first time, the program processes the data concerning the electrical layout of the substation and builds various lists and tables which are frequently used for making switching decisions. When changes in the configuration of a substation are requested, the program automatically determines alternative switching sequences and selects and implements the most economical sequence. It also assists the operator in the evaluation of abnormal circumstances. The program was tested using five substation configurations. It has been demonstrated that the program is suitable for implementing interlocking schemes and for determining and implementing switching operations.
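As a simplified illustration of the kind of generalized interlocking rule involved: an isolator cannot make or break load current, so every circuit breaker in series with it must be open before the isolator is operated. The sketch below encodes only that one rule; the thesis's rule base is more elaborate, and all names and the model here are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Switch:
    name: str
    kind: str      # "breaker" or "isolator"
    closed: bool

def may_operate(switch, series_breakers):
    """Simplified interlocking check: an isolator may only be operated
    once every circuit breaker in series with it is open, since it is
    not rated to interrupt load current. Breakers may always operate."""
    if switch.kind == "breaker":
        return True
    return all(not b.closed for b in series_breakers)

cb = Switch("CB1", "breaker", closed=True)
iso = Switch("IS1", "isolator", closed=True)
print(may_operate(iso, [cb]))   # False: CB1 still carries load
cb.closed = False
print(may_operate(iso, [cb]))   # True: safe to open the isolator
```

A full sequence-switching program layers many such constraints (equipment ratings, load balance, supply continuity) on top of a connectivity model of the substation, then searches the feasible sequences for the most economical one.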
(1995-05) DeCorby, Raymond George; Kasap, S. O.; MacDonald, R. I.
Frequency domain measurements of responsivity and modulation bandwidth were made for GaAs metal-semiconductor-metal (MSM) photodetector arrays. Photodetector arrays, in which several photodetectors drive a common signal bus, have applications as optical to electrical converters in optoelectronic switching and signal processing elements. The MSM detector is an attractive candidate for these applications due to its ease of fabrication and integration, its low dark current and noise, and its speed. Measurements for three detector sizes and two array sizes consistently showed that MSM detectors exhibit a position dependent response as part of an array. If the array is terminated in a load at one end of the common signal bus only, the bandwidth of the detectors increases with distance from the load. Terminating the bus at both ends reduces the position dependence of the response and significantly increases the half-power bandwidth of all detectors in the array. A test facility was designed and constructed to allow frequency testing of high speed photodetectors at 830 nm up to 18 GHz using a vector network analyzer. A temperature stabilized continuous wave laser light source is modulated using an electro-optic Y-fed balanced bridge modulator (YBBM) and the modulated light illuminates the test photodetectors. The system is calibrated using a commercially available, wideband InGaAs Schottky photodiode with known response.
(1970-01) Davidson, Gordon B.; Billington, R.
Many power system analysis problems which were previously considered intractable due to the large number of required calculations are now being solved using medium to large sized digital computers. This thesis presents three methods for power system stability calculations. The first is a fast reduction method, the second uses an iterative approach to solve the system matrix, and the third, which is believed to be a new approach to system solution, involves removal by reduction of unimportant buses and a direct solution of the remaining equations. Generator modelling (including saturation and saliency) is included in this method. Three transient stability programs were written using these three methods of system solution, and a load flow program was also written. Tests were undertaken to compare and assess the relative advantages and disadvantages of each method. The step-by-step technique was selected as the method for solving the swing equation after comparison tests with the Runge-Kutta method. In another series of tests, the second and third methods were compared on their ability to handle non-linear loads. Both methods were successful, but the third was the more stable in a mathematical sense. A technique for modelling governors and exciters is also discussed, and tests were successfully carried out to show some of their effects on system solutions.
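For a classical single machine against an infinite bus, the swing equation referred to above takes the form M d²δ/dt² = Pm - Pmax sin(δ), with damping neglected. A minimal step-by-step integration sketch; the per-unit parameter values below are illustrative assumptions, not the thesis's test systems:

```python
import math

def swing_step_by_step(M, Pm, Pmax, delta0, dt, steps):
    """Step-by-step integration of the classical swing equation
    M * d2(delta)/dt2 = Pm - Pmax*sin(delta), damping neglected,
    for a single machine against an infinite bus."""
    delta, omega = delta0, 0.0   # rotor angle (rad), speed deviation (rad/s)
    trajectory = [delta]
    for _ in range(steps):
        accel = (Pm - Pmax * math.sin(delta)) / M
        omega += accel * dt
        delta += omega * dt      # semi-implicit Euler step
        trajectory.append(delta)
    return trajectory

# Hypothetical per-unit data: the machine starts at the equilibrium of a
# strong tie (Pmax = 1.8), which then weakens to Pmax = 1.2, so the rotor
# angle swings toward the new equilibrium and oscillates around it.
M, Pm = 0.026, 0.8
delta_eq = math.asin(Pm / 1.8)   # pre-disturbance equilibrium angle
traj = swing_step_by_step(M, Pm, Pmax=1.2, delta0=delta_eq, dt=0.001, steps=2000)

# Stable if the swing never reaches the post-disturbance unstable
# equilibrium point at pi - asin(Pm/Pmax):
stable = max(traj) < math.pi - math.asin(Pm / 1.2)
print("stable" if stable else "unstable")
```

Runge-Kutta methods trade more work per step for higher accuracy; a comparison like the one in the thesis would integrate the same trajectory with both schemes and compare cost and error.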