University of Toronto develops tech for AV object-tracking
The University of Toronto Institute for Aerospace Studies (UTIAS) believes it has developed a pair of multi-object tracking tools that can better detect the position and motion of objects, such as vehicles, pedestrians and cyclists, than the technology currently deployed on autonomous vehicles (AVs), according to a news release.
Tracking information is collected from computer vision sensors (2D camera images and 3D LIDAR scans) and filtered at each time stamp, 10 times a second, to predict the future movement of moving objects, the release says.
“Once processed, it allows the robot to develop some reasoning about its environment. For example, there is a human crossing the street at the intersection, or a cyclist changing lanes up ahead,” Sandro Papais, a PhD student at UTIAS in the Faculty of Applied Science & Engineering, says in the release. “At each time stamp, the robot’s software tries to link the current detections with objects it saw in the past, but it can only go back so far in time.”
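In tracking-by-detection terms, that per-frame linking step looks roughly like the Python sketch below. It is an illustrative toy, not the UTIAS code: helper names such as `associate` and the constant-velocity `predict` are assumptions made for the example.

```python
# Minimal sketch of a generic frame-to-frame tracking-by-detection loop
# (illustrative only; not the UTIAS code). Names such as `associate` and
# the constant-velocity `predict` are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class Track:
    track_id: int
    positions: list = field(default_factory=list)  # history of (x, y) centres

    def predict(self):
        """Constant-velocity guess at the next position from the last two observations."""
        if len(self.positions) < 2:
            return self.positions[-1]
        (x1, y1), (x2, y2) = self.positions[-2], self.positions[-1]
        return (2 * x2 - x1, 2 * y2 - y1)

def associate(tracks, detections, max_dist=2.0):
    """Greedy nearest-neighbour matching between predicted track positions and new detections."""
    matches, unmatched = [], list(detections)
    for track in tracks:
        px, py = track.predict()
        best = min(unmatched, key=lambda d: (d[0] - px) ** 2 + (d[1] - py) ** 2, default=None)
        if best is not None and ((best[0] - px) ** 2 + (best[1] - py) ** 2) ** 0.5 < max_dist:
            matches.append((track, best))
            unmatched.remove(best)
    return matches, unmatched

def step(tracks, detections, next_id):
    """One 0.1 s update: match detections to existing tracks, spawn tracks for the rest."""
    matches, unmatched = associate(tracks, detections)
    for track, det in matches:
        track.positions.append(det)
    for det in unmatched:
        tracks.append(Track(next_id, [det]))
        next_id += 1
    return tracks, next_id
```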
The university introduced a graph-based optimization method, Sliding Window Tracker (SWTrack), in a new paper presented at the 2024 International Conference on Robotics and Automation in Yokohama, Japan, last month. The method uses additional temporal information to prevent missed objects, the release says.
“SWTrack widens how far into the past a robot considers when planning,” says Papais. “So instead of being limited by what it just saw one frame ago and what is happening now, it can look over the past five seconds and then try to reason through all the different things it has seen.”
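The sketch below illustrates that sliding-window idea in simplified form: detections from the last five seconds are buffered, and candidate links are allowed to skip frames rather than connecting only consecutive ones. It is a toy approximation of the concept, not the paper's graph optimization; `plausible_link` and the speed threshold are assumptions.

```python
# Illustrative sketch of the sliding-window idea behind a tracker like SWTrack:
# keep several seconds of detections and allow links that skip frames, rather
# than matching only against the previous frame. This is a simplified toy, not
# the paper's graph optimization; `plausible_link` and max_speed are assumptions.
from collections import deque
from itertools import combinations

WINDOW_SECONDS = 5.0
FRAME_PERIOD = 0.1  # 10 Hz

def plausible_link(det_a, det_b, dt, max_speed=20.0):
    """True if an object could travel from det_a to det_b in dt seconds at a realistic speed."""
    dist = ((det_a[0] - det_b[0]) ** 2 + (det_a[1] - det_b[1]) ** 2) ** 0.5
    return dist <= max_speed * dt

class SlidingWindow:
    def __init__(self):
        # Buffer of (timestamp, detections) pairs covering the last five seconds.
        self.frames = deque(maxlen=int(WINDOW_SECONDS / FRAME_PERIOD))

    def push(self, t, detections):
        self.frames.append((t, detections))

    def candidate_edges(self):
        """All cross-frame detection pairs in the window that a real object could explain.
        A full tracker would score these edges and solve a graph optimization over them."""
        edges = []
        for (ta, dets_a), (tb, dets_b) in combinations(self.frames, 2):
            dt = abs(tb - ta)
            for a in dets_a:
                for b in dets_b:
                    if plausible_link(a, b, dt):
                        edges.append(((ta, a), (tb, b)))
        return edges
```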
The team used data from nuScenes, a dataset collected from AVs operating on city roads, to train and validate their algorithm, the release says. The data includes human annotations that the team used to benchmark the performance of SWTrack.
They found that a five-second window was optimal for improving tracking performance; extending it further slowed computation.
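For readers curious what that benchmarking data looks like, the snippet below uses the public nuscenes-devkit to walk through one scene's human annotations, the kind of ground truth a tracker's output is scored against. The mini dataset split and the dataroot path are placeholders, and this is not the team's evaluation code.

```python
# Sketch of pulling human-annotated ground truth from nuScenes with the public
# nuscenes-devkit (pip install nuscenes-devkit); the dataroot path and the
# v1.0-mini split are placeholders, not the team's setup.
from nuscenes.nuscenes import NuScenes

nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=False)

scene = nusc.scene[0]
sample_token = scene['first_sample_token']
while sample_token:
    sample = nusc.get('sample', sample_token)
    for ann_token in sample['anns']:
        ann = nusc.get('sample_annotation', ann_token)
        # Each annotation carries an instance token, so the same physical object
        # can be followed across frames: the ground truth a tracker is scored against.
        print(ann['instance_token'], ann['category_name'], ann['translation'])
    sample_token = sample['next']  # empty string at the end of the scene
```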
“Most tracking algorithms would have a tough time reasoning over some of these temporal gaps,” Papais said. “But in our case, we were able to validate that we can track over these longer periods of time and maintain more consistent tracking for dynamic objects around us.”
Papais says in the release that he plans to continue research on improving robot memory and extending the knowledge into other areas of robotics infrastructure.
UncertaintyTrack is a second tool recently introduced by the university. The release describes it as a collection of extensions for 2D tracking-by-detection methods that leverage probabilistic object detection.
“Probabilistic object detection quantifies the uncertainty estimates of object detection,” explains Chang Won (John) Lee, a co-author of a paper introducing the tech. “The key thing here is that for safety-critical tasks, you want to be able to know when the predicted detections are likely to cause errors in downstream tasks such as multi-object tracking. These errors can occur because of low-lighting conditions or heavy object occlusion.
“Uncertainty estimates give us an idea of when the model is in doubt, that is, when it is highly likely to give errors in predictions. But there’s this gap because probabilistic object detectors aren’t currently used in multi-object tracking.”
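A minimal example of what Lee describes: a detection that carries a covariance alongside its box lets the tracker weigh how much to trust it when matching. The sketch below uses a generic Mahalanobis-distance gate for that purpose; it is an assumption-laden illustration, not the UncertaintyTrack extensions themselves.

```python
# Toy illustration of the idea Lee describes: a probabilistic detector returns not
# just a box but an uncertainty estimate, and the tracker can use it when matching.
# This is a generic Mahalanobis-distance gate, not the UncertaintyTrack method itself.
import numpy as np
from dataclasses import dataclass

@dataclass
class ProbabilisticDetection:
    mean: np.ndarray  # e.g. 2D box centre (x, y)
    cov: np.ndarray   # 2x2 covariance expressing the detector's uncertainty

def mahalanobis_gate(track_pos: np.ndarray, det: ProbabilisticDetection, gate: float = 3.0) -> bool:
    """Accept a match only if the track position lies within `gate` standard deviations
    of the detection, so noisy detections (e.g. low light, heavy occlusion) get a
    wider catchment instead of being dropped or mismatched."""
    diff = det.mean - track_pos
    d2 = float(diff @ np.linalg.inv(det.cov) @ diff)
    return d2 <= gate ** 2

# Example: the same positional offset passes the gate under high uncertainty
# and fails under low uncertainty.
track = np.array([10.0, 5.0])
confident = ProbabilisticDetection(np.array([11.5, 5.0]), np.eye(2) * 0.1)
uncertain = ProbabilisticDetection(np.array([11.5, 5.0]), np.eye(2) * 2.0)
print(mahalanobis_gate(track, confident))  # False: 1.5 m off with ~0.3 m std dev
print(mahalanobis_gate(track, uncertain))  # True: same offset, but std dev ~1.4 m
```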
Steven Waslander, director of UTIAS’s Toronto Robotics and AI Laboratory and a co-author on both papers, said the advancements outlined in the two papers build on work that his lab has been focusing on for a number of years.
“[The Toronto Robotics and AI Laboratory] has been working on assessing perception uncertainty and expanding temporal reasoning for robotics for multiple years now, as they are the key roadblocks to deploying robots in the open world more broadly,” Waslander says. “We desperately need AI methods that can understand the persistence of objects over time, and ones that are aware of their own limitations and will stop and reason when something new or unexpected appears in their path. This is what our research aims to do.”
IMAGES
Feature photo: Sandro Papais, a University of Toronto PhD student, is co-author of a new paper that introduces a graph-based optimization method to improve object tracking for self-driving cars. (Photo: University of Toronto)