Tim Barfoot

Canada Research Chair (Tier II) in Autonomous Space Robotics
BASc (Eng Sci Aero, Toronto), PhD (Toronto), PEng (Ontario)

Institute for Aerospace Studies
University of Toronto
4925 Dufferin Street, Room 189
Toronto, ON M3H 5T6 Canada
tim.barfoot [at] utoronto.ca
+1 416-667-7719 (office)
+1 416-667-7799 (fax)
skype: tim.barfoot
Google Scholar, arXiv, Google+, calendar


Check out our self-driving car team!

If you're a UofT student, you may be interested in aUToronto, the UofT self-driving car team. We'll instrument a GM Bolt with sensors and program it to self-drive, then compete in a series of on-road challenges over the next 3 years against 7 other teams from across North America.

We're also looking for UofT faculty to get involved as advisors and we're always open to donations! Send me an email if you're interested.

We also have a mailing list if you're just interested in getting updates about the team as we progress.

Book on State Estimation

For several years I've been teaching a graduate course on state estimation for robotics and have expanded my notes into a book:

State Estimation for Robotics (394 pages)
SO(3) and SE(3) Identities and Approximations (2 pages)

I've now turned over the manuscript to the publisher, so I can't make any more changes to the official first edition. However, if you find any typos/errors, please email me, as I will continue to keep an up-to-date PDF here as well as a list of errata for the published version. Please make sure you have the latest version before filing a bug report.


Research

The purpose of our lab's research program is to enable field robotics applications through advances in visual navigation of mobile robots. In recent years, we have developed a variety of visual techniques for robots, including (i) long-range visual odometry (aided by celestial observations) and (ii) visual teach and repeat (VT&R). VT&R has been particularly interesting in that it allows a robot to repeat a long (several-kilometre) route that was taught manually, using only a single vision sensor (stereo camera, lidar, Kinect) for feedback (no GPS needed). We have also layered a planning framework on top of VT&R to allow a robot to build a network of reusable paths (NRP) autonomously while exploring a space. Imagine a robot finding its way down a long canyon and then realizing it is a dead end; because it has saved the outbound route, it can backtrack along it using VT&R and then try something else. VT&R has been successful because it avoids the need to construct a visual map of the world in a single privileged coordinate frame and instead utilizes a topometric map.
Today we are interested in extending our ability to navigate visually to truly long durations (months or years) in order to enable real applications. We need to deal with changes in appearance (lighting, weather), in geometry (obstructions, dynamic objects), in our robots (hardware degradation/replacement/upgrades), and even in our algorithms. As a challenge, how could we build a map that a robot could use to navigate safely for 10 years? We plan to spend the next several years finding out.
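To make the topometric idea concrete, here is a minimal, hypothetical sketch (not our actual VT&R implementation): a map that stores only relative SE(2) transforms between keyframes, so a taught route can be repeated, or backtracked out of a dead end, without ever defining a single global coordinate frame. All names here are illustrative assumptions.

```python
import math

def compose(t1, t2):
    """Compose two SE(2) transforms (x, y, theta): apply t1, then t2."""
    x1, y1, a1 = t1
    x2, y2, a2 = t2
    c, s = math.cos(a1), math.sin(a1)
    return (x1 + c * x2 - s * y2, y1 + s * x2 + c * y2, a1 + a2)

def invert(t):
    """Inverse of an SE(2) transform: (R, t) -> (R^T, -R^T t)."""
    x, y, a = t
    c, s = math.cos(a), math.sin(a)
    return (-(c * x + s * y), s * x - c * y, -a)

class TopometricMap:
    """Keyframes linked by relative transforms; no global frame is stored."""

    def __init__(self):
        self.edges = {}  # (from_id, to_id) -> relative SE(2) transform

    def teach(self, a, b, transform):
        # Store the taught edge and its inverse, so the route can be
        # traversed in either direction (e.g. backtracking out of a canyon).
        self.edges[(a, b)] = transform
        self.edges[(b, a)] = invert(transform)

    def relative_pose(self, route):
        """Chain edge transforms along a list of keyframe ids."""
        pose = (0.0, 0.0, 0.0)
        for a, b in zip(route, route[1:]):
            pose = compose(pose, self.edges[(a, b)])
        return pose
```

For example, after teaching edges A→B and B→C, `relative_pose(['C', 'B', 'A'])` composes the stored inverse edges, giving exactly the transform needed to retrace the outbound route.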


Teaching

AER521: Mobile Robotics and Perception (Winter 2015-present)
AER1514: Introduction to Mobile Robotics (Winter 2013-present)
AER1513: State Estimation for Aerospace Vehicles (Fall 2009-present)
AER407: Space Systems Design (Fall 2007-2012)
AER372: Control Systems (Winter 2011-2012)
MAT185: Linear Algebra (Winter 2008)
AER506: Spacecraft Dynamics and Control I (Fall 2001-2002)




Community Service

General Chair for Field and Service Robotics (FSR) 2015
Associate/Multimedia Editor for the International Journal of Robotics Research (IJRR) 2011-present
Associate Editor for the Journal of Field Robotics (JFR) 2012-present
Program Co-Chair of Computer and Robot Vision (CRV) 2012-13
Associate Editor for the IEEE International Conference on Robotics and Automation (ICRA) 2012
Area Chair for Robotics: Science and Systems (RSS) 2012-13