September 24, 2014

Speaker: Jordan Stadler, Ontario Tech University

Title: A Framework for Video-Driven Crowd Synthesis

Abstract: We present a framework for video-driven crowd synthesis. The proposed framework employs motion analysis techniques to extract inter-frame motion vectors from an exemplar crowd video. Motion vectors collected over the duration of the video are processed to compute global motion paths, which encode the dominant motions observed over the course of the video. These paths are then fed into a behavior-based crowd simulation framework, which is responsible for synthesizing crowd animations that respect the motion patterns observed in the video. Our system synthesizes 3D virtual crowds by animating virtual humans along the trajectories returned by the crowd simulation framework. We also propose a new metric for measuring the visual similarity between the synthesized and exemplar crowds. We demonstrate the proposed approach on crowd videos collected under different settings, and the initial results appear promising.
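
The abstract outlines a pipeline: extract inter-frame motion vectors from the video, aggregate them into global motion paths, and drive a crowd simulator with those paths. As a rough illustration of the first two stages, here is a minimal Python sketch. The abstract does not name the specific motion-analysis technique, so dense optical flow (Farneback's method in OpenCV) and simple point advection are assumptions made purely for illustration, and the function names are hypothetical.

```python
# A minimal sketch of the motion-extraction and path-building stages.
# Assumptions (not specified in the abstract): dense Farneback optical
# flow stands in for the motion analysis, and global motion paths are
# approximated by advecting seed points through successive flow fields.
import cv2
import numpy as np

def extract_flow_fields(video_path):
    """Yield a dense inter-frame motion vector field for each frame pair."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    if not ok:
        raise IOError("cannot open " + video_path)
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # flow[y, x] = (dx, dy): pixel displacement between frames
        flow = cv2.calcOpticalFlowFarneback(
            prev_gray, gray, None,
            pyr_scale=0.5, levels=3, winsize=15,
            iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
        yield flow
        prev_gray = gray
    cap.release()

def trace_global_paths(flow_fields, seed_points):
    """Advect seed points through the flow fields to form motion paths."""
    pts = np.asarray(seed_points, dtype=np.float32)
    paths = [[(float(x), float(y))] for x, y in pts]
    for flow in flow_fields:
        h, w = flow.shape[:2]
        xs = np.clip(pts[:, 0], 0, w - 1).astype(int)
        ys = np.clip(pts[:, 1], 0, h - 1).astype(int)
        pts = pts + flow[ys, xs]  # follow the local motion vector
        for path, (x, y) in zip(paths, pts):
            path.append((float(x), float(y)))
    return paths
```

In the system described above, paths like these would then parameterize the behavior-based crowd simulator; the proposed similarity metric is not detailed in the abstract, so it is not sketched here.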