Interactive Robot Learning from Non-Expert Human Teachers
Dec 2nd 10am-11am
Abstract: Today’s general-purpose robot learning policies are limited to 50-80% zero-shot performance on downstream tasks. To close this performance gap, deployed robots face the ongoing challenge of continually learning and adapting to diverse downstream tasks on demand under human guidance. Existing frameworks allow experts to guide robots in various ways, including providing reward functions, demonstrations, and corrective feedback. However, most robots will have access only to non-expert users for guidance after deployment. At the same time, traditional machine learning methods often used in robot learning are tethered to the expectation of informative and near-optimal data. Novice human teachers, rich in practical experience yet lacking in robotics and engineering knowledge, bring data that strays from this ideal. My research addresses the challenges that non-expert teachers pose for robot learning and enables robots to learn effectively under their guidance by 1) developing active reward learning algorithms that allow the robot to take an active role in learning by asking informative questions, 2) leveraging a hybrid action representation for imitation learning that is more robust to suboptimal demonstrations, and 3) enabling the robot to interpret the human teacher’s natural feedback in the form of facial expressions, language, and gestures. Tapping into diverse sources of non-expert human feedback can lead to successful robot policies that work effectively alongside humans and learn from them.
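As background on the first thread, active reward learning, the following is a minimal sketch of how a robot can choose informative questions: it maintains a belief over reward parameters and asks the pairwise comparison whose answer is expected to be most informative. The linear reward features, Bradley-Terry preference model, and the rationality constant beta are illustrative assumptions, not the speaker's actual algorithm.

```python
# A minimal sketch (not the speaker's method) of active preference-based reward
# learning: keep a particle belief over reward weights and ask the pairwise
# trajectory comparison with the highest expected information gain.
import numpy as np

rng = np.random.default_rng(0)
D = 4            # reward feature dimension (assumed)
N = 500          # number of particles representing the belief over reward weights
beta = 5.0       # Bradley-Terry rationality constant (assumed)

particles = rng.normal(size=(N, D))
particles /= np.linalg.norm(particles, axis=1, keepdims=True)
weights = np.full(N, 1.0 / N)

def pref_prob(w, phi_a, phi_b):
    """P(teacher prefers A over B) under a Bradley-Terry model with linear rewards."""
    return 1.0 / (1.0 + np.exp(-beta * (w @ (phi_a - phi_b))))

def expected_info_gain(phi_a, phi_b):
    """Expected reduction in answer entropy, marginalized over the current belief."""
    p = pref_prob(particles, phi_a, phi_b)      # prob of answer "A" per particle
    p_bar = np.sum(weights * p)                 # predictive prob of answer "A"
    h = lambda q: -(q * np.log(q + 1e-12) + (1 - q) * np.log(1 - q + 1e-12))
    return h(p_bar) - np.sum(weights * h(p))

# Candidate queries: pairs of trajectory feature vectors (illustrative random data).
candidates = [(rng.normal(size=D), rng.normal(size=D)) for _ in range(50)]
best = max(candidates, key=lambda q: expected_info_gain(*q))

# After showing the best query and observing the answer (say, "A preferred"),
# update the belief by importance weighting and repeat.
weights *= pref_prob(particles, *best)
weights /= weights.sum()
```

In a real system the candidate queries would be short trajectory clips shown to the teacher, and the ask-update-reselect loop would continue until the belief over reward weights concentrates.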
Speaker Info: Prof. Yuchen Cui
Yuchen Cui is currently an Assistant Professor of Computer Science at UCLA. Prior to UCLA, she was a postdoc in the Computer Science Department at Stanford University and a fellow of the Stanford Institute for Human-Centered AI. Yuchen’s research focuses on interactive robot learning and specifically on how to enable low-effort teaching for non-expert users. Yuchen obtained her Ph.D. in Computer Science from the University of Texas at Austin. Her dissertation is titled “Efficient algorithms for low-effort human teaching of robots”. During her graduate studies, Yuchen also conducted internships at Honda Research Institute, Diligent Robotics, and Facebook AI Research.
3D Computer Vision and its evolution from depth sensors to novel view synthesis
Dec 2nd 1pm-2pm
Abstract: Over the last decade, computer vision has undergone a remarkable transformation, with many difficult problems now considered solved. However, it is important to recognize that much of this progress is confined to 2D images and videos, which provide only a superficial understanding of the underlying 3D structure. Achieving a comprehensive understanding of 3D scenes remains largely an unsolved challenge. Solving this problem is crucial as computer vision shifts from passive tasks, like search and surveillance, to more active applications that require 3D modeling, such as those that drive the decision-making of embodied autonomous systems interacting with the 3D environment. In this talk, I will present my research journey through these areas, beginning with real-time processing techniques for Kinect-style sensors in computer graphics, leading to the development of neural 3D representations and their unsupervised training through “novel-view synthesis” objectives.
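To give a concrete sense of the “novel-view synthesis” objective mentioned at the end of the abstract, here is a minimal, generic sketch of training a neural 3D representation purely from posed photographs: the field is rendered along camera rays and the rendered colors are compared to observed pixels. The tiny MLP, ray setup, sampling scheme, and scene bounds are illustrative assumptions, not the speaker's implementation.

```python
# A minimal sketch of a novel-view-synthesis training objective for a neural field.
import torch
import torch.nn as nn

class RadianceField(nn.Module):
    """Tiny MLP mapping a 3D point to (density, RGB)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 4))
    def forward(self, x):
        out = self.net(x)
        sigma = torch.relu(out[..., :1])      # volume density >= 0
        rgb = torch.sigmoid(out[..., 1:])     # color in [0, 1]
        return sigma, rgb

def render_rays(field, origins, dirs, n_samples=32, near=0.5, far=3.0):
    """Standard volume rendering: integrate color along each ray."""
    t = torch.linspace(near, far, n_samples)
    pts = origins[:, None, :] + dirs[:, None, :] * t[None, :, None]   # (R, S, 3)
    sigma, rgb = field(pts)
    delta = (far - near) / n_samples
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)               # (R, S)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1.0 - alpha + 1e-10], dim=1), dim=1)[:, :-1]
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)                      # (R, 3)

# Photometric loss against observed pixels from one training view (dummy data here).
field = RadianceField()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
origins = torch.zeros(1024, 3)                      # rays through one camera center
dirs = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
target_rgb = torch.rand(1024, 3)                    # pixel colors from the photo

pred_rgb = render_rays(field, origins, dirs)
loss = ((pred_rgb - target_rgb) ** 2).mean()        # novel-view synthesis objective
loss.backward()
opt.step()
```

Repeating this photometric update over many training views forces the field to encode geometry and appearance well enough to render unseen viewpoints, which is what makes the objective self-supervised.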
Speaker Info: Prof. Andrea Tagliasacchi
Andrea Tagliasacchi is an associate professor at Simon Fraser University (Vancouver, Canada), where he holds the Visual Computing Research Chair within the School of Computing Science. He is also a part-time (20%) staff research scientist at Google DeepMind (Toronto, Canada), as well as an associate professor (status only) in the Department of Computer Science at the University of Toronto. Before joining SFU, he spent four wonderful years as a full-time researcher at Google (mentored by Paul Lalonde, Geoffrey Hinton, and David Fleet). Before joining Google, he was an assistant professor at the University of Victoria (2015-2017), where he held the Industrial Research Chair in 3D Sensing (jointly sponsored by Google and Intel). His alma maters include EPFL (postdoc), SFU (PhD, NSERC Alexander Graham Bell fellow), and Politecnico di Milano (MSc, gold medalist). Several of his papers have received best-paper award nominations at top-tier graphics and vision conferences, and he is the recipient of the 2015 SGP best paper award, the 2020 CVPR best student paper award, and the 2024 CVPR best paper award (honorable mention). His research focuses on 3D visual perception, which lies at the intersection of computer vision, computer graphics, and machine learning. For more information, please visit https://theialab.ca.
Winter Is Coming - Extending Self-Driving Perception to Adverse Weather
Dec 3rd 11am-12pm
Abstract: Self-driving vehicles are becoming increasingly capable and closer to large-scale deployment. However, they are primarily restricted to operating in a few specific cities with favorable weather conditions. In this talk, I will describe my research into enabling self-driving perception to succeed in adverse weather, be it snow, rain, or fog. I will outline the many challenges that degraded sensing presents and identify the key limitations of current learning techniques, which struggle to generalize to the wide variety of conditions that autonomous vehicles can encounter. I will then discuss ongoing projects in my lab that are investigating generalization, adaptation, distillation, and self-supervised learning as techniques to help rapidly expand the operational domain of perception networks without heavy reliance on new labels in each domain. This work will accelerate the deployment of large autonomous vehicle fleets into new cities with more varied weather conditions.
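As background on the distillation idea listed above, the sketch below shows one common (assumed) setup: a teacher network trained on clear-weather imagery provides soft targets for a student that sees degraded versions of the same scenes, so no new labels are needed in the adverse domain. The placeholder networks, synthetic degradation, and temperature are illustrative, not the lab's actual pipeline.

```python
# A minimal sketch of cross-domain knowledge distillation for perception.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student class distributions."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * (t * t)

# Placeholder models: in practice these would share a full detection architecture.
teacher = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
student = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
teacher.eval()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

# Clear-weather images and crude synthetic degradation standing in for snow/rain/fog.
clear_batch = torch.rand(8, 3, 64, 64)
adverse_batch = clear_batch * 0.6 + 0.4 * torch.rand_like(clear_batch)

with torch.no_grad():
    teacher_logits = teacher(clear_batch)          # soft targets from the clear view
loss = distillation_loss(student(adverse_batch), teacher_logits)
loss.backward()
opt.step()
```

In practice the teacher and student would share a complete detection architecture, and the degraded inputs would come from real or physically simulated weather rather than additive noise.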
Speaker Info: Prof. Steven Waslander
Prof. Steven Waslander is a leading authority on autonomous aerial and ground vehicles, including multirotor drones and autonomous driving vehicles, as well as Simultaneous Localization and Mapping (SLAM) and multi-vehicle systems. He received his B.Sc.E. in 1998 from Queen’s University, and his M.S. in 2002 and Ph.D. in 2007, both from Stanford University in Aeronautics and Astronautics, where as a graduate student he created the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC), the world’s most capable outdoor multi-vehicle quadrotor platform at the time. He was a Control Systems Analyst for Pratt & Whitney Canada from 1998 to 2001. He was recruited to Waterloo from Stanford in 2008, where he founded and directs the Waterloo Autonomous Vehicle Laboratory (WAVELab), extending the state of the art in autonomous drones and autonomous driving through advances in localization and mapping, object detection and tracking, integrated planning and control methods, and multi-robot coordination. In 2018, he joined the University of Toronto Institute for Aerospace Studies (UTIAS) and founded the Toronto Robotics and Artificial Intelligence Laboratory (TRAILab). He is an active member of the University of Toronto Robotics Institute, for which he serves as Chair of the Partner Consortium Committee.
Digital Pathology Meets Spatial Omics - Emerging Problems in Data Integration, Solutions, and New Opportunities
Dec 3rd 1pm-2pm
Abstract: This talk will introduce parallel advancements in two emerging fields, computational pathology and spatial omics, in the modern era of biomedical sciences. My team leverages computational image analysis tools and best engineering practices to integrate spatial omics datasets with their associated histology images and draw meaningful conclusions from them. We work to fundamentally understand cell type and cell state compositions and the underlying quantitative morphometric features at scales ranging from transcripts to tissue microanatomy. I will highlight our ongoing efforts within the Human Biomolecular Atlas Project (HuBMAP), a consortium spanning 42 sites focused on creating an atlas of the human body at the cellular level using spatial technologies. I will also discuss the detection and segmentation of multiple cell types, cell states, and tissue microanatomy exclusively from brightfield histology images, and explore several use cases of these tools, including kidney disease trajectory prediction relevant to the NIH Kidney Precision Medicine Project (KPMP) consortium, and distinguishing glomeruli with chronic and acute injury. Additionally, I will demonstrate our cloud-based open-source distributed software systems (FUSION: Functional Unit State IdentificatiON in Whole Slide Images, accessible at http://fusion.hubmapconsortium.org/; and CompRePS: Computational Renal Pathology Suite, accessible at https://athena.rc.ufl.edu/). These systems are designed to conduct a variety of computational image analysis tasks in digital pathology, starting with the analysis of brightfield histology images and extending to the integration of histology with spatial omics data. We will conclude by discussing new opportunities and potential directions for collective contributions in the field of computational pathology.
Speaker Info: Prof. Pinaki Sarder
Pinaki Sarder is currently an associate professor of AI in the Section of Quantitative Health of the Department of Medicine, as well as the Associate Director for Imaging in the Intelligent Critical Care Center, at the University of Florida (UF). Before joining UF, he was an associate professor in the Departments of Pathology & Anatomical Sciences and Biomedical Engineering at the University at Buffalo (UB), where he played a central role in building the computationally enabled graduate program in Computational Cell Biology, Anatomy, and Pathology. Prior to UB, he completed post-doctoral training at the Mallinckrodt Institute of Radiology at the Washington University in St. Louis (WUSTL) School of Medicine. He received his B.Tech. degree in electrical engineering from the Indian Institute of Technology, Kanpur, in 2003, and M.Sc. and Ph.D. degrees in electrical engineering from WUSTL in 2010.