The purpose of our lab's research program is to advance visual navigation of mobile robots. Our work finds application in transportation, planetary exploration, mining, warehouses, offices, and military scenarios.
Over the last 15 years, our lab has built and tested a variety of navigation approaches. Much of this work centres on a navigation stack we pioneered called visual teach and repeat (VT&R). VT&R is notable in that it allows a robot to repeat a long (several-kilometre) route that was taught manually, using only a single vision sensor (stereo camera, lidar, Kinect) for feedback, with no GPS needed. VT&R succeeds because it avoids constructing a visual map of the world in a single privileged coordinate frame and instead uses a topometric map. We have also devoted considerable effort to improving the robustness of visual localization in the presence of lighting and seasonal change.
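To make the topometric idea concrete, here is a minimal sketch (not the actual VT&R software; all names are illustrative) of a map that stores only relative transforms along edges between keyframes, so no single privileged global frame is ever constructed. A scalar translation stands in for a full SE(3) transform to keep the example short.

```python
from dataclasses import dataclass, field

@dataclass
class Keyframe:
    """One node of the topometric map; sensor data would live here in practice."""
    kf_id: int
    neighbors: dict = field(default_factory=dict)  # kf_id -> relative transform

class TopometricMap:
    """Poses exist only relative to neighbouring keyframes, never globally."""
    def __init__(self):
        self.keyframes = {}

    def add_keyframe(self, kf_id):
        self.keyframes[kf_id] = Keyframe(kf_id)

    def add_edge(self, a, b, rel):
        # Store the transform in both directions; negation is the 1-D
        # stand-in for inverting an SE(3) transform.
        self.keyframes[a].neighbors[b] = rel
        self.keyframes[b].neighbors[a] = -rel

    def chain(self, path):
        # Compose relative transforms along a path of keyframe ids
        # (addition stands in for SE(3) composition).
        total = 0.0
        for a, b in zip(path, path[1:]):
            total += self.keyframes[a].neighbors[b]
        return total
```

During repeat, the robot would localize against the nearest keyframe and only ever chain transforms locally, which is why a globally consistent map is never required.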
Today we are quite interested in the idea of generalizability. New rich sensors appear all the time, and building something like VT&R requires substantial software engineering and testing. Even porting navigation software from one robot to another similar robot inevitably involves tuning many parameters to maximize performance. The vision we are working towards is a generalized navigation framework that would work with any robot base and any rich sensor. The structure, or template, of the navigation framework can ideally stay the same, but the details must be filled in for each new robot/sensor combination (e.g., how do we model sensors and extract features? how do we model motion? where are the sensors located on the robot? what are the sensor calibration parameters? what are the controller gains?). This is where data and machine learning can help us. We would like to simply gather input/output data for a new robot, identify or learn all the necessary details for a given task, and then auto-generate the navigation stack from a template. We believe this is possible, and that it will require carefully blending ideas from classical robotics with machine learning. Please have a look at our recent papers for progress towards this challenging goal.
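A toy sketch of the template idea, assuming a hypothetical interface (none of these names come from our actual software): the stack's structure is fixed, while the robot/sensor-specific pieces are plug-in parameters that could be identified or learned from input/output data.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class RobotSensorConfig:
    """The per-robot 'details' to be filled in; each field could be learned."""
    sensor_model: Callable[[float], float]          # maps state to a measurement
    motion_model: Callable[[float, float], float]   # (state, command) -> next state
    sensor_offset: float                            # where the sensor sits on the robot
    calibration_gain: float                         # e.g., a learned scale factor
    controller_gain: float                          # e.g., a tuned feedback gain

def build_nav_stack(cfg: RobotSensorConfig):
    """Auto-generate a simple feedback loop from the fixed template."""
    def step(state, target):
        measured = cfg.calibration_gain * cfg.sensor_model(state) + cfg.sensor_offset
        command = cfg.controller_gain * (target - measured)
        return cfg.motion_model(state, command)
    return step
```

The same `build_nav_stack` template serves every robot/sensor pair; only the `RobotSensorConfig` changes, which is the part we would like to learn from data rather than hand-tune.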