
October 3, 2012

Title: Visual Search for an Object in a 3D Environment using a Mobile Robot

Speaker: John K. Tsotsos, Department of Computer Science and Engineering, and Centre for Vision Research, York University, Toronto, Ontario

Abstract: Consider the problem of visually finding an object in a mostly unknown space with a mobile robot. Clearly, not all possible views and images can be examined in a practical system. Visual attention is a complex phenomenon; we view it as a mechanism that optimizes the search processes inherent in vision. Here, we describe a practical robotic vision system that employs some of these attentive processes. We cast the task as an optimization problem: maximize the probability of finding the target subject to a fixed cost limit on the total number of robotic actions. Because this problem is inherently intractable, we present an approximate solution and investigate its performance and properties. We conclude that our approach suffices to solve the problem and has additional desirable empirical characteristics. Examples will be shown of the algorithm operating in both test and real domains, specifically on our autonomous wheelchair robot and on Honda's ASIMO humanoid robot.
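
The budgeted-search formulation in the abstract lends itself to a greedy approximation: at each step, choose the sensing action with the highest expected chance of detecting the target, update beliefs, and repeat until the action budget is spent. The Python sketch below illustrates that general idea only; the grid cells, candidate actions, and detection model are illustrative assumptions, not the system or algorithm described in the talk.

    def greedy_visual_search(prior, actions, detect_prob, budget):
        """Greedy sketch of budgeted visual search (illustrative assumptions).

        prior: dict mapping cell -> P(target is in cell)
        actions: list of candidate sensing actions (e.g., view poses)
        detect_prob(a, c): P(detector fires | target in cell c, action a)
        budget: maximum number of actions the robot may execute
        """
        # belief[c] = P(target is in c AND has not been detected yet)
        belief = dict(prior)
        plan = []
        for _ in range(budget):
            # Pick the action with the largest expected detection probability.
            best = max(actions,
                       key=lambda a: sum(belief[c] * detect_prob(a, c)
                                         for c in belief))
            plan.append(best)
            # The target survives undetected in cell c with prob (1 - detect_prob).
            for c in belief:
                belief[c] *= 1.0 - detect_prob(best, c)
        return plan

    # Toy usage: a hypothetical 4-cell corridor and two view poses.
    cells = [0, 1, 2, 3]
    prior = {c: 0.25 for c in cells}
    actions = ["look_at_0", "look_at_2"]

    def detect_prob(action, cell):
        center = 0 if action == "look_at_0" else 2
        return 0.8 if abs(cell - center) <= 1 else 0.0

    print(greedy_visual_search(prior, actions, detect_prob, budget=3))
    # e.g. ['look_at_2', 'look_at_0', 'look_at_2']

A greedy plan like this is only an approximation; the point of the abstract is that the exact budget-constrained problem is intractable, so some such approximate strategy is needed in practice.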