Tim Barfoot

BASc (Eng Sci Aero, Toronto), PhD (Toronto), PEng (Ontario), IEEE Fellow

Institute for Aerospace Studies
University of Toronto
4925 Dufferin Street, Room 189
Toronto, ON M3H 5T6 Canada
tim.barfoot [at] utoronto.ca
+1 416-667-7719 (office)
+1 416-667-7799 (fax)
skype: tim.barfoot
Google Scholar citations



Potential students, please refer to the main UTIAS site for the application procedure; the normal cycle is to submit your application in January to start in September of the same year. Please also note that despite the name of my lab, my interests in robotics have broadened from "space" applications to any situation involving navigation of mobile robots (see detailed research statement below).


Research

The purpose of our lab's research program is to advance visual navigation of mobile robots. Our work finds application in transportation, planetary exploration, mining, warehouses, offices, and military scenarios.

Over the last 15 years, our lab has spent a lot of time building and testing different navigation approaches. Much of our work is focused on a navigation stack we pioneered called visual teach and repeat (VT&R). VT&R has been particularly interesting in that it allows a robot to repeat a long (several kilometre) route that was taught manually, using only a single vision sensor (stereo camera, lidar, Kinect) for feedback (no GPS needed). VT&R has been successful because it avoids the need to construct a visual map of the world in a single privileged coordinate frame and instead uses a topometric map. We have also spent a lot of time improving the robustness of visual localization in the presence of lighting and seasonal change.
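To give a rough flavour of the topometric-map idea, here is a minimal Python sketch (purely illustrative, not our actual VT&R code): the taught route is stored as a chain of keyframes linked by relative SE(3) transforms, and on repeat the robot only ever needs its pose relative to the nearest keyframe, never a globally consistent map.

    import numpy as np

    class TopometricRoute:
        """Taught route stored as relative transforms between consecutive keyframes."""

        def __init__(self):
            self.edges = []  # edges[i] = 4x4 transform T_{i+1,i} from keyframe i to keyframe i+1

        def add_edge(self, T_next_curr):
            # Append a relative pose estimated during the teach pass.
            self.edges.append(np.asarray(T_next_curr))

        def pose_of_keyframe(self, j, root=0):
            # Compose relative edges only if a chained estimate is ever needed;
            # estimation error stays local to each edge rather than accumulating
            # in one privileged global frame.
            T = np.eye(4)
            for i in range(root, j):
                T = self.edges[i] @ T
            return T  # T_{j,root}

    # During the repeat pass, path tracking only requires the robot's pose relative
    # to the closest taught keyframe (from visual localization against that
    # keyframe's stored scene), so global consistency of the route is never required.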

Today we are quite interested in the idea of generalizability. New rich sensors are coming out all the time, and building something like VT&R takes a lot of software engineering and testing. Even porting navigation software from one robot to another similar robot inevitably involves tuning many parameters to maximize performance. The vision we are working towards is a generalized navigation framework that would work with any robot base and any rich sensor. The structure or template of the navigation framework can ideally stay the same, but the details need to be filled in for each new robot/sensor combination (e.g., how should the sensors be modelled and features extracted? how should the motion be modelled? where are the sensors located on the robot? what are the sensor calibration parameters? what are the controller gains?). This is where data and machine learning can help us. We would like to be able to simply gather input/output data for our new robot, identify/learn all the necessary details for a given task, and then auto-generate the navigation stack from a template. We think this is possible, but it will require carefully blending ideas from classical robotics with machine learning. Please have a look at our recent papers for progress towards this challenging goal.
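To make the "template plus learned details" idea concrete, here is a rough Python sketch (the names and fields below are assumptions for illustration only, not an existing API): the structure of the stack stays fixed, and each new robot/sensor combination only supplies the identified pieces, ideally learned from logged input/output data rather than hand-tuned.

    from dataclasses import dataclass
    from typing import Callable
    import numpy as np

    @dataclass
    class NavStackDetails:
        # The robot/sensor-specific pieces that must be identified or learned.
        sensor_model: Callable        # maps raw sensor data to features/landmarks
        motion_model: Callable        # predicts the next state from state and control input
        T_sensor_robot: np.ndarray    # 4x4 extrinsic transform of the sensor in the robot frame
        calibration: dict             # intrinsics, biases, noise parameters, etc.
        controller_gains: dict        # path-tracking gains for this robot base

    def build_nav_stack(template, details: NavStackDetails):
        # Instantiate the fixed template (estimator + planner + controller)
        # with the details identified for this particular robot and sensor.
        return template(details)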

Book on State Estimation

For several years I have been teaching a graduate course on state estimation for robotics, and I expanded my notes into a book that was published in 2017:

State Estimation for Robotics -- First Edition (399 pages)

I have also been working on a draft of a second edition for publication and welcome feedback (see the foreword for changes from the first edition):

State Estimation for Robotics -- Draft Second Edition (585 pages)

If you find any typos or errors, please email me; I will continue to keep an up-to-date unofficial copy here, as well as a list of errata for the published version (see the preamble at the above link). Please make sure you have the latest version before filing a bug report.

The official first edition can be found on the Cambridge University Press page here; you might even be able to download the published version if your institution subscribes to Cambridge. A Chinese version of the book is available through the Xi'an Jiao Tong University Press here or as a PDF.

Some additional resources:

SO(3) and SE(3) Identities and Approximations (2 pages)
Review by Luca Carlone (MIT)
Lie Group Summary Sheet by Johann Laconte


Teaching

AER1514: Introduction to Mobile Robotics (Winter 2013-2017, 2019-2020)
AER521: Mobile Robotics and Perception (Winter 2015-2017)
AER1513: State Estimation for Aerospace Vehicles (Fall 2009-2016, 2019)
AER372: Control Systems (Winter 2011-2012, 2019)
MAT185: Linear Algebra (Winter 2008)
AER407: Space Systems Design (Fall 2007-2012)
AER506: Spacecraft Dynamics and Control I (Fall 2001-2002)




Community Service

Foundation Board Member for Robotics: Science and Systems (RSS) 2019-present
General Chair for Field and Service Robotics (FSR) 2015
Associate Editor and Executive Board Member for the International Journal of Robotics Research (IJRR) 2011-present
Associate Editor for the Journal of Field Robotics (JFR) 2012-present
Program Co-Chair of Computer and Robot Vision (CRV) 2012-13
Associate Editor for the IEEE International Conference on Robotics and Automation (ICRA) 2012, 2017, 2019
Area Chair for Robotics: Science and Systems (RSS) 2012-13