
September 5, 2012

Speaker: Mr. Kevin Jalbert, MSc student, Ontario Tech University

Title: Predicting Mutation Score Using Source Code and Test Suite Metrics

Abstract: Mutation testing has traditionally been used to evaluate the effectiveness of test suites and provide confidence in the testing process. Mutation testing involves creating many versions of a program, each containing a single syntactic fault. A test suite is evaluated against these program versions (mutants) to determine the percentage of mutants the test suite is able to identify (the mutation score). A major drawback of mutation testing is that even a small program may yield thousands of mutants, which can make the process cost-prohibitive. To improve the performance and reduce the cost of mutation testing, we proposed a machine learning approach that predicts mutation score from a combination of source code and test suite metrics. We conducted an empirical evaluation of our approach using eight open-source software systems and achieved an average method-level prediction accuracy of 49.7920%. Experimentally, we found a pair of configuration parameters that maximized prediction accuracy across all eight test subjects, without per-subject tuning. Finally, we demonstrated that training on 90% of the available data is not necessary to achieve near-optimal prediction accuracy.
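
To make the two ideas in the abstract concrete, the following is a minimal Python sketch of (a) computing a mutation score as the fraction of mutants a test suite kills and (b) training a classifier to predict a discretized mutation-score category from method-level metrics. The metric names, the synthetic data, and the choice of classifier (scikit-learn's random forest) are illustrative assumptions only, not the authors' actual implementation or results.

```python
# Illustrative sketch only: predicting a mutation-score category from
# method-level metrics, using synthetic data and a generic classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def mutation_score(killed: int, total: int) -> float:
    """Fraction of generated mutants that the test suite detects (kills)."""
    return killed / total if total else 0.0


# Hypothetical per-method feature vectors, e.g. lines of code, cyclomatic
# complexity, number of covering tests, number of test assertions.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
# Hypothetical labels: mutation score discretized into low/medium/high (0/1/2).
y = rng.integers(0, 3, size=500)

# Train on only half the data (i.e. far less than 90%), mirroring the
# abstract's point that a large training fraction is not required.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("prediction accuracy:", accuracy_score(y_test, model.predict(X_test)))
```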

Biography: Kevin Jalbert is a Computer Science MSc student in the Faculty of Science under the supervision of Dr. Jeremy Bradbury. He has published four peer-reviewed papers and was the recipient of the best paper award at the Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE 2012).