Upcoming Student Presentations

CS Seminar Series: Graduate Student Presentations

Date: Thursday, February 26, 2026
Time: 11:00 am – 12:30 pm
Room: Online via Google Meet: meet.google.com/hhz-doei-gpx
Number of Presentations: 3

Presentation 1:

Title of Presentation: Examining audio-based pseudo-haptics for enhancing mid-air interactions in VR
Presenter Name: Nour Halabi (PhD Candidate, Computer Science, Ontario Tech University)
Supervisor: Dr. Bill Kapralos (Faculty of Business and Information Technology, Ontario Tech University)

Abstract: This doctoral research investigates how audio-based pseudo-haptics can enhance mid-air interaction in immersive VR/AR environments, particularly in learning and training scenarios where dedicated haptic hardware is unavailable. While visual-based pseudo-haptics has been widely explored, the role of audio-based pseudo-haptics in facilitating mid-air interactions remains underexamined. This gap is especially relevant for mobile and wearable immersive systems used in healthcare training, where effective controller-free interaction is important yet often overlooked given the absence of physical feedback. The doctoral research aims to develop a multimodal framework for supporting mid-air manipulation through audio-based pseudo-haptics and to understand how such cues influence performance, perceived intuitiveness, and workload. As part of the ongoing work, a study is being designed to examine how contextual auditory feedback, delivered as audio-based pseudo-haptic cues, affects object sorting and stacking tasks within an augmented reality dementia-care training simulation. This phase of the research will compare two interaction styles (direct grasping and indirect pinch-and-pull), each with and without audio cues. It is anticipated that the study results will guide refinements to the framework and support the development of more accessible, intuitive, and effective mid-air interactions for immersive mobile learning applications.


Presentation 2:

Title of Presentation: Uncertainty-aware fusion of foundation and task-specific models for cardiac MRI segmentation
Presenter Name: Mosarrat Rumman (MSc Candidate, Computer Science, Ontario Tech University)
Supervisors: Dr. Mehran Ebrahimi (Faculty of Science, Ontario Tech University) and Dr. Kourosh Heidar Davoudi (Faculty of Science, Ontario Tech University)

Abstract: Vision foundation models, such as the Segment Anything Model (SAM), demonstrate strong zero-shot generalization but lack precision on anatomically challenging structures. In contrast, convolutional neural network (CNN)-based models, such as nnU-Net, achieve high accuracy on domain-specific data but struggle to generalize to unseen data. To address these complementary limitations, we propose an uncertainty-aware fusion framework that integrates the generalizability of foundation models with the anatomical precision of task-specific models for cardiac MRI segmentation. The proposed approach combines Dempster-Shafer Theory (DST) with an entropy-guided fallback mechanism to perform voxel-wise fusion of calibrated probability maps. Unlike simple ensemble methods, the framework accounts for inter-model agreement, conflict, and model uncertainty: DST fusion is applied where the models agree, while high-conflict regions are handled by an entropy-guided fallback that selects predictions from the more reliable model. Extensive evaluation on the M&Ms dataset (in-domain) and the ACDC dataset (cross-domain) demonstrates consistent improvements in Dice and IoU across model pairings of varying strengths. In-domain gains are modest, whereas cross-domain evaluations show substantially larger improvements; notably, the nnU-Net+SAM2 pairing achieves relative gains of approximately 8% in Dice and 11% in IoU on the cross-domain dataset. Comparisons with simple averaging, ablation studies, and statistical analyses confirm the effectiveness of the proposed framework. To our knowledge, this is the first application of voxel-wise DST-based fusion to combine vision foundation models with task-specific CNNs for cardiac MRI segmentation.
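For attendees unfamiliar with Dempster-Shafer fusion, the mechanism described in the abstract can be sketched roughly as follows. This is a minimal illustration assuming a binary foreground/background frame and a single conflict threshold; the function names, the threshold value, and the use of per-voxel binary entropy as the reliability measure are assumptions for illustration, not the presenter's actual implementation.

```python
# Illustrative sketch: voxel-wise Dempster-Shafer fusion of two calibrated
# foreground-probability maps, with an entropy-guided fallback in regions of
# high inter-model conflict. Assumes a binary frame {foreground, background}.
import numpy as np

def binary_entropy(p, eps=1e-8):
    """Shannon entropy (bits) of a per-voxel foreground probability."""
    p = np.clip(p, eps, 1 - eps)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def dst_fuse(p_a, p_b, conflict_threshold=0.5):
    """Fuse two probability maps voxel-wise.

    Dempster's rule combines the maps where the models broadly agree;
    voxels whose conflict mass exceeds the threshold instead take the
    prediction of the lower-entropy (more confident) model.
    """
    # Conflict mass K: total belief assigned to contradictory hypotheses.
    k = p_a * (1 - p_b) + (1 - p_a) * p_b
    # Dempster's rule of combination for the binary frame.
    fused = (p_a * p_b) / np.clip(1 - k, 1e-8, None)
    # Entropy-guided fallback: trust the more certain model per voxel.
    fallback = np.where(binary_entropy(p_a) <= binary_entropy(p_b), p_a, p_b)
    return np.where(k > conflict_threshold, fallback, fused)
```

In this sketch, agreement (e.g. 0.9 vs 0.8) yields a fused probability sharper than either input, while strong disagreement (e.g. 0.95 vs 0.1) bypasses Dempster's rule entirely and defers to the more confident model, mirroring the agreement/conflict split described in the abstract.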


Presentation 3:

Title of Presentation: Reliability of machine learning models in bloodstain pattern analysis
Presenter Name: Ainaz Alavi (MSc Candidate, Computer Science, Ontario Tech University)
Supervisors: Dr. Peter Lewis (Faculty of Business and Information Technology, Ontario Tech University) and Dr. Theresa Stotesbury (Faculty of Science, Ontario Tech University)

Abstract: This work evaluates the performance and suitability of machine learning (ML) models for bloodstain pattern analysis (BPA), focusing on two critical metrics: accuracy and robustness. The research analyzes model performance under varying conditions to identify the factors that influence reliability. These findings provide critical insight into the challenges of adapting ML to forensic settings, ultimately helping to ensure that new technology meets the high standard of precision required for bloodstain pattern interpretation.