Hi HCI students,
This talk today is probably of interest to you. I encourage you to attend, as well as the other upcoming job talks. Don't limit yourself to the HCI-specific ones!
— Fanny Chevalier
Assistant Professor
Department of Computer Science and Department of Statistical Sciences
University of Toronto
fanny@cs.toronto.edu
http://fannychevalier.net/
Begin forwarded message:
From: Sven Dickinson <sven@cs.toronto.edu>
Subject: reminder: 1st robotics candidate talk today at 11:00am in GB303 (Florian Shkurti)
Date: 5 March 2018 at 08:05:54 GMT-5
To: dcsall@cs.utoronto.ca
Cc: Sven Dickinson <sven@cs.toronto.edu>, sburns@cs.utoronto.ca, Konstantin Khanin <khanin@math.toronto.edu>, Goldie Nejat <nejat@mie.utoronto.ca>, Angela Schoellig <schoellig@utias.utoronto.ca>, Camille Angiers <camille.angiers@utoronto.ca>
A reminder of our first robotics talk today!
Date: Monday, March 5
Time: 11:00am - 12:00pm
Location: Galbraith (GB) 303
Speaker: Florian Shkurti, McGill University
Title: Enabling robot videographers to record the visual footage that human experts want
Abstract:
The adoption of robotics is becoming widespread in many sectors of society, most notably in automated transportation, warehousing, and advanced manufacturing. Yet for robots that operate in more challenging, unstructured natural domains (e.g., underwater, in the air, in deserts, forests, and lakes), where automated environmental monitoring promises exciting opportunities for societal progress, open research problems still abound.
In this talk I will focus on the problem of enabling robot videographers/documentarians to autonomously navigate in unstructured 3D environments alongside scientists, helping them record visual footage that they deem valuable for their work.
I will present a method to infer the expert's reward function over images, using a small number of labeled and a large number of unlabeled examples. This reward function is used to guide the robot's exploration and data collection in unknown environments. I will also present vision-based algorithms for tracking and navigation that are robust to long-term loss of visual contact with the subject, by making use of the subject's learned behavior, estimated via inverse reinforcement learning. Finally, I will describe a visual and inertial localization and mapping method that enables robust navigation in a wide range of challenging environments. Experimental validation of these methods on underwater, aerial and ground robots will be shown.
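(For those curious about the first of these methods, here is a minimal sketch of how a reward function over images might be fit from a few labeled and many unlabeled examples. It assumes a simple self-training approach with scikit-learn; all names, feature dimensions, and counts are illustrative assumptions, not the speaker's actual method.)

    # A minimal sketch (illustrative, not the speaker's method):
    # semi-supervised reward regression over image features, using
    # self-training to exploit a large unlabeled pool.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X_labeled = rng.normal(size=(20, 128))      # features of 20 expert-scored images
    y_labeled = rng.uniform(size=20)            # expert reward labels in [0, 1]
    X_unlabeled = rng.normal(size=(2000, 128))  # features of many unscored images

    # Fit an initial reward model on the small labeled set.
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_labeled, y_labeled)

    # Self-training: pseudo-label the unlabeled pool and refit, so the
    # large unlabeled set also shapes the learned reward function.
    for _ in range(3):
        pseudo_y = model.predict(X_unlabeled)
        X_all = np.vstack([X_labeled, X_unlabeled])
        y_all = np.concatenate([y_labeled, pseudo_y])
        model.fit(X_all, y_all)

    # The learned reward can then score candidate views to guide
    # the robot's exploration and data collection.
    candidate_views = rng.normal(size=(5, 128))
    print(model.predict(candidate_views))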
Bio:
Florian Shkurti is a Ph.D. candidate in computer science and robotics at McGill University, working with Gregory Dudek. His research is at the intersection of mobile robotics, computer vision, and machine learning. His favorite research problems revolve around increasing the autonomy of mobile robots, and include inverse reinforcement learning, imitation learning, the control of dynamical systems under uncertainty and partial observability, visibility-aware multi-robot path planning, and robust visual mapping and localization in 3D. He is a recipient of the Lorne Trottier Fellowship, the AAAI-15 Robotics Fellowship, and the NSERC Alexander Graham Bell CGS Doctoral Award. He is also a member of the Center for Intelligent Machines and the NSERC Canadian Field Robotics Network. He did his M.Sc. in computer science at McGill, and his Hon. B.Sc. in computer science and mathematics at the University of Toronto.