CVIS 2022

Post-deployment Considerations for AI in Radiology

Abstract:

Speaker Info: Dr. Matthew Lungren

Chief Medical Information Officer, Nuance Communications

Dr. Lungren is Chief Medical Information Officer at Nuance Communications, a Microsoft Company. As a physician and clinical machine learning researcher, he maintains a part-time interventional radiology practice at UCSF while also serving as adjunct faculty for other leading academic medical centers including Stanford and Duke.

Prior to joining Microsoft, Dr. Lungren was an interventional radiologist and research faculty member at Stanford University Medical School, where he led the Stanford Center for Artificial Intelligence in Medicine and Imaging (AIMI). More recently, he served as Principal for Clinical AI/ML at Amazon Web Services in Worldwide Public Sector Healthcare, focusing on business development for clinical machine learning technologies in the public cloud.

His scientific work has led to more than 100 publications, including work on multi-modal data fusion models for healthcare applications, new computer vision and natural language processing approaches for healthcare-specific domains, opportunistic screening with machine learning for public health applications, open medical data as a public good, and prospective clinical trials for clinical AI translation. He has served as an advisor for early-stage startups and large Fortune 500 companies on healthcare AI technology development and go-to-market strategy. Dr. Lungren is frequently featured in national news outlets such as NPR, Vice News, and Scientific American, and he regularly speaks at national and international scientific meetings on the topic of AI in healthcare.

Dr. Lungren is also a top-rated instructor on Coursera, where his AI in Healthcare course, designed especially for learners with non-technical backgrounds, has been completed by more than 10,000 students around the world. Enrollment is open now: https://www.coursera.org/learn/fundamental-machine-learning-healthcare

Synthetic Data for Computer Vision and Agile Robotic Manipulation

Abstract:

Speaker Info: Gavriel State

Senior Director for Simulation and AI at NVIDIA

Gavriel State is a Senior Director for Simulation and AI at NVIDIA, based in Toronto, where he leads efforts involving applications of AI technology to simulation systems and vice versa. This includes work on synthetic data generation through the Omniverse Replicator system, reinforcement learning and sim-to-real robotics transfer with Isaac Gym and Isaac Sim, as well as supporting the development of 3D reconstruction technologies.

Previously, Gavriel founded TransGaming Inc. and spent 15 years focused on real-time 3D rendering, pioneering the use of 3D API portability approaches for cross-platform gaming with the WINE Windows compatibility environment, leading efforts to support WebGL in Google’s Chrome browser through ANGLE, and managing work on the SwiftShader software 3D renderer.

Gavriel is a graduate of the University of Waterloo’s Systems Design Engineering program.

Physical Knowledge-Informed Learning Adaptation for Internet-of-Things

Abstract: The number of everyday Internet-of-Things (IoT) devices is projected to grow into the billions in the coming decade, enabling various smart building applications. These applications, especially in-home long-term occupant monitoring, rely on emerging non-intrusive sensing techniques. The acquired IoT sensing data are often of varying efficiency and quality due to system and/or deployment constraints, and sensing data distributions can change significantly under different sensing conditions. Therefore, from the data/learning perspective, accurate information learning through purely data-driven approaches requires a large amount of labeled data, which is costly and difficult to obtain in real-world applications. We address these challenges by combining physical and data-driven knowledge to reduce the labeled data needed, via physical knowledge-guided model transfer. In this talk, we use structural vibration-based occupant sensing applications to evaluate our model transfer schemes.

Speaker Info: Dr. Shijia Pan

University of California Merced

Dr. Shijia Pan is an Assistant Professor at the University of California, Merced. She received her bachelor’s degree in Computer Science and Technology from the University of Science and Technology of China and her Ph.D. degree in Electrical and Computer Engineering from Carnegie Mellon University. Her research interests include cyber-physical sensing systems (CPS), multimodal learning for CPS/IoT, and ubiquitous computing. She has worked across multiple disciplines, focusing on indoor human information acquisition through ambient sensing. She has published in both top-tier Computer Science ACM/IEEE conferences and high-impact Civil Engineering journals. She has received the Rising Stars in EECS distinction, the Nick G. Vlahakis Graduate Fellowship, the Google Anita Borg Scholarship, Best Paper Awards (IoTDI, ASME SHM/NDE, HASCA), Best Poster Awards (SenSys, IPSN), Best Demo Awards (Ubicomp, BuildSys), the Best Presentation Award (SenSys Doctoral Colloquium), and the Audience Choice Award (BuildSys) from ACM/IEEE conferences.

Combining Remote Sensing and Machine Learning in Support of Digital Agriculture

Abstract: Maintaining and improving the productivity and resiliency of our agricultural and food systems, while simultaneously mitigating and adapting to climate change in the face of an uncertain future and increasingly competitive uses of limited resources, represents a grand challenge of our time. My research, as an interdisciplinary study, endeavors to address this challenge by combining advanced sensing systems with computational engineering technologies. In this presentation, I will introduce our lab’s recent accomplishments and ongoing research work, including: 1) combining satellite remote sensing with deep learning/machine learning for large-scale crop monitoring and management decision making; 2) combining unmanned aerial vehicle (UAV)-based high-resolution images with deep learning/machine learning for fine-scale high-throughput plant phenotyping and other precision agriculture applications; and 3) developing cyber-infrastructure tools for agricultural decision making.

Speaker Info: Dr. Zhou Zhang

Assistant Professor, University of Wisconsin-Madison, USA

Dr. Zhou Zhang received her B.S. degree in astronautics engineering and her M.S. degree in instrumentation science and opto-electronics engineering from Beihang University, Beijing, China, in 2010 and 2013, respectively. She then received her Ph.D. degree in geomatics (civil engineering) from Purdue University, USA, in 2017; her dissertation developed new machine learning methods for hyperspectral remote sensing data classification. From 2017 to 2019, she worked as a postdoctoral scholar at the University of California, Davis, on almond yield prediction using satellite remote sensing (Landsat and others) and machine learning.

She is currently an Assistant Professor of Biological Systems Engineering in the College of Agricultural and Life Sciences at the University of Wisconsin-Madison, USA. Her research interests include satellite remote sensing (Landsat, MODIS, Sentinel, etc.), drone-based imaging platform development for precision agriculture, multi-source remote sensing data fusion, and artificial intelligence and machine learning in agricultural applications. Dr. Zhang has over 40 publications in peer-reviewed journals and conferences. She was a recipient of the Best Student Paper award (third place) in the 2016 IEEE IGARSS Student Paper Competition. For more details, please visit Dr. Zhang’s lab website: https://digitalag.bse.wisc.edu

CVIS 2021

CVIS 2020

How Experimental Psychology Can Help Explainable Artificial Intelligence

Wednesday, November 25th, 2020 at 9:00 am - 10:15 am (EST)

Please register to receive the URL via Email

Abstract: Artificial intelligence powered by deep neural networks has reached a level of complexity where it can be difficult or impossible to express how a model makes its decisions. This black-box problem is especially concerning when the model makes decisions with consequences for human well-being. In response, an emerging field called explainable artificial intelligence (XAI) aims to increase the interpretability, fairness, and transparency of machine learning. In this talk I will describe how cognitive psychologists can make contributions to XAI. The human mind is also a black box, and cognitive psychologists have over one hundred and fifty years of experience modeling it through experimentation. We aim to translate the methods and rigour of cognitive psychology to the study of artificial black boxes in the service of explainability.

The outline of the talk is as follows: I will provide a review of XAI for non-experts, arguing that current methods possess a blind spot that can be complemented by the experimental cognitive tradition. I will then provide a framework for research in XAI, and highlight exemplary cases of experimentation within XAI inspired by psychological science. Today’s XAI methods rely on access to model architecture and parameters that is not always available to users, practitioners, and regulators. I will end the talk by describing a psychology-inspired technique that uses response times (RTs). RTs are observable without privileged access to the model. Moreover, dynamic inference models performing conditional computation generate variable RTs for visual learning tasks, which depend on how the hierarchy of features learned by the network is utilized in prediction. I will show how this method can cast light on deep neural nets without opening up the black box.

Speaker Info: Graham Taylor

Associate Professor of Engineering, University of Guelph; Faculty Member, Vector Institute; Canada CIFAR AI Chair

Graham Taylor is a Canada Research Chair and Associate Professor of Engineering at the University of Guelph. He directs the University of Guelph Centre for Advancing Responsible and Ethical AI and is a member of the Vector Institute for AI. He has co-organized the annual CIFAR Deep Learning Summer School, and trained more than 60 students and researchers on AI-related projects. In 2016 he was named as one of 18 inaugural CIFAR Azrieli Global Scholars. In 2018 he was honoured as one of Canada’s Top 40 under 40. In 2019 he was named a Canada CIFAR AI Chair. He spent 2018-2019 as a Visiting Faculty member at Google Brain, Montreal.

Graham co-founded Kindred, which was featured at number 29 on MIT Technology Review’s 2017 list of smartest companies in the world. He is the Academic Director of NextAI, a non-profit accelerator for AI-focused entrepreneurs.

Radiomics and Radio-genomics: Opportunities for Precision Medicine

Thursday, November 26th, 2020 at 9:00 am - 10:15 am (EST)

Please register to receive the URL via Email

Abstract: In this talk, Dr. Tiwari will focus on her lab’s recent efforts in developing radiomic (extracting computerized sub-visual features from radiologic imaging), radiogenomic (identifying radiologic features associated with molecular phenotypes), and radiopathomic (radiologic features associated with pathologic phenotypes) techniques to capture insights into the underlying tumor biology as observed on non-invasive routine imaging. She will focus on applications of this work for predicting disease outcome, recurrence, progression and response to therapy specifically in the context of brain tumors. She will also discuss current efforts in developing new radiomic features for post-treatment evaluation and predicting response to chemo-radiation treatment. Dr. Tiwari will conclude her talk with a discussion of some of the translational aspects of her work from a clinical perspective.

Speaker Info: Pallavi Tiwari

Assistant Professor, Department of Biomedical Engineering, School of Medicine; Case Center for Imaging Research, Case Western Reserve University, USA

Dr. Pallavi Tiwari is an Assistant Professor of Biomedical Engineering and the director of the Brain Image Computing Laboratory at Case Western Reserve University. She is also a member of the Case Comprehensive Cancer Center. Her research interests lie in machine learning, data mining, and image analysis for personalized medicine solutions in oncology and neurological disorders. Her research has so far resulted in over 50 peer-reviewed publications, 50 peer-reviewed abstracts, and 9 patents (3 issued, 6 pending). Dr. Tiwari has been the recipient of several scientific awards, most notably being named one of 100 women achievers by the Government of India for making a positive impact in the field of Science and Innovation. In 2018, she was selected as one of Crain’s Cleveland Business Forty Under 40. In 2020, she was awarded the J&J Women in STEM (WiSTEM2D) scholar award in Technology. Her research is funded through the National Cancer Institute, Department of Defense, Johnson & Johnson, V Foundation Translational Award, Dana Foundation, State of Ohio, and the Case Comprehensive Cancer Center.

Probabilistic Object Detection for Autonomous Driving: Moving Beyond Detection Accuracy

Thursday, November 26th, 2020 at 1:00 pm - 2:15 pm (EST)

Please register to receive the URL via Email

Abstract: Modern object detection has relentlessly pursued perfection on a single metric, mean average precision, with impressive gains in performance across datasets and sensor types over the last few years. I will discuss our multiple contributions in this domain, particularly in 3D object detection with monocular, stereo, and LIDAR/camera fusion. Our work has regularly topped the KITTI Vision Benchmark, and emphasizes the value of attention focused on object shape to enhance localization and extent estimation. Despite the strong progress in this domain, current network outputs are primarily deterministic, providing little visibility beyond class confidence as to the probability that a detection is accurate. This makes current object detectors a black box for downstream processes such as tracking and prediction, and can lead to overconfidence in low-quality detections. To help address this challenge, I will discuss our recent work on probabilistic object detectors (PODs) on two fronts. First, I will describe efforts to place the evaluation of PODs on a secure footing by introducing proper scoring rules, with both local and global extent, that can determine whether a predictive distribution is both well calibrated and discriminative. Then, I will discuss our work on BayesOD, a novel probabilistic object detector that exhibits strong output distribution prediction capabilities and outperforms existing PODs in terms of calibration and sharpness.
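As a toy illustration of the proper-scoring-rule idea described above (this is not code from the talk, and the numbers are invented), the Gaussian negative log-likelihood is a proper scoring rule: it rewards predictions that are both accurate and honestly uncertain, and heavily penalizes overconfident misses.

```python
import math

def gaussian_nll(mu, sigma, x):
    """Negative log-likelihood of observation x under N(mu, sigma^2):
    a proper scoring rule that rewards both calibration and sharpness."""
    return 0.5 * math.log(2 * math.pi * sigma**2) + (x - mu)**2 / (2 * sigma**2)

# Two hypothetical detectors predict the same box-center coordinate (mu)
# for a ground truth at x = 10.0, but report different uncertainties (sigma).
overconfident = gaussian_nll(mu=12.0, sigma=0.1, x=10.0)  # tiny sigma, wrong
calibrated = gaussian_nll(mu=12.0, sigma=2.0, x=10.0)     # honest sigma

# The overconfident prediction is penalized far more heavily.
print(overconfident > calibrated)  # True
```

A deterministic detector exposes none of this: only by predicting a distribution can a downstream tracker distinguish a confident, well-localized detection from a risky one.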

Speaker Info: Steven L. Waslander, PhD

Associate Professor, Institute for Aerospace Studies; Director, Toronto Robotics and AI Laboratory; University of Toronto

Prof. Steven Waslander is a leading authority on autonomous aerial and ground vehicles, including multirotor drones and autonomous driving vehicles, Simultaneous Localization and Mapping (SLAM), and multi-vehicle systems. He received his B.Sc.E. in 1998 from Queen’s University, and his M.S. in 2002 and Ph.D. in 2007, both from Stanford University in Aeronautics and Astronautics, where as a graduate student he created the Stanford Testbed of Autonomous Rotorcraft for Multi-Agent Control (STARMAC), the world’s most capable outdoor multi-vehicle quadrotor platform at the time. He was a Control Systems Analyst for Pratt & Whitney Canada from 1998 to 2001. He was recruited to Waterloo from Stanford in 2008, where he founded and directed the Waterloo Autonomous Vehicle Laboratory (WAVELab), extending the state of the art in autonomous drones and autonomous driving through advances in localization and mapping, object detection and tracking, integrated planning and control methods, and multi-robot coordination. In 2018, he joined the University of Toronto Institute for Aerospace Studies (UTIAS) and founded the Toronto Robotics and Artificial Intelligence Laboratory (TRAILab). Prof. Waslander’s innovations were recognized by the Ontario Centres of Excellence Mind to Market award for the best Industry/Academia collaboration (2012, with Aeryon Labs), best paper and best poster awards at the Computer and Robot Vision Conference (2018), and two Outstanding Performance Awards and two Distinguished Performance Awards while at the University of Waterloo. His work on autonomous vehicles resulted in the Autonomoose, the first autonomous vehicle created at a Canadian university to drive on public roads. His insights into autonomous driving have been featured in the Globe and Mail, Toronto Star, National Post, the Rick Mercer Report, and on national CBC Radio.
He is Associate Editor of the IEEE Transactions on Aerospace and Electronic Systems, has served as the General Chair for the International Autonomous Robot Racing Competition (IARRC 2012-15), as the program chair for the 13th and 14th Conference on Computer and Robot Vision (CRV 2016-17), and as the Competitions Chair for the International Conference on Intelligent Robots and Systems (IROS 2017).

Real-time Sport Analytics from Broadcast Feeds

Friday, November 27th, 2020 at 1:00 pm - 2:15 pm (EST)

Please register to receive the URL via Email

Abstract: Sports analytics is about observing, understanding, and describing the game in an intelligent manner. In practice, this means designing a fully automated, robust, end-to-end pipeline: from visual input, to player and group activities, to player and team evaluation, to planning. Despite major advancements in computer vision and machine learning, sports analytics is still in its infancy and relies mostly on manually collected data. This talk focuses on the use of broadcast feeds for sports analytics, covers the components of a vision system for data acquisition, provides examples of how Sportlogiq captures data from broadcast videos, and finally describes the challenges of deploying vision systems at scale and what Sportlogiq has learned by processing games from more than 20 different soccer leagues.

Speaker Info: Mehrsan Javan, PhD, MBA

Chief Technology Officer, Sportlogiq; Adjunct Professor, Electrical and Computer Engineering Department, McGill University

Mehrsan Javan is the co-founder and CTO of Sportlogiq. He holds a PhD in electrical engineering (McGill University, 2014) and an MBA (2017), with over a decade of experience building intelligent systems. His passion is new technologies, with a particular interest in intelligent systems and their positive impact on our daily lives. He is also an adjunct faculty member in the ECE department at McGill University, has published numerous research articles in the fields of computer vision, machine learning, and sports analytics, and holds several patents and pending patent applications. His main research interest is explainability and causal inference in artificial intelligence.

CVIS 2019

How to Build an Applied Research Group in The Financial Services Industry

Abstract: Artificial Intelligence is gaining broad adoption in the financial industry, and with good reason. AI provides the opportunity to create new revenue streams, optimize operations and make better investment decisions. In this talk, I will describe some of the challenges and strategies in creating an AI group with the rare skill set that is required to truly leverage AI in finance. I will also describe some of the most impactful use cases in the industry, and propose a hypothesis of where we may want to be in five years.

Speaker Info: Yevgeniy Vahlis

Head of Artificial Intelligence Technology at BMO

Yevgeniy Vahlis is the Head of the Artificial Intelligence Technology group at BMO Financial Group. Prior to joining BMO, Yevgeniy built machine learning research groups at Borealis AI and Georgian Partners and worked closely with startups to introduce applied research into their products. Yevgeniy kicked off his career at AT&T Labs in New York as a research scientist after completing his PhD in Computer Science at the University of Toronto and a year of postdoctoral studies at Columbia University.

Multi-View 3D Reconstruction of Atomic Resolution Biomolecules

Abstract: Electron Cryo-microscopy (Cryo-EM) is a vision-based technique for estimating the 3D structure of biological molecules at atomic resolutions. It addresses one of the foremost problems in biology, namely macromolecular structure discovery. The problem, in a nutshell, is a form of multi-view 3D structure determination, inferring the 3D electron density of a particle from a large set of 2D images from an electron microscope. I’ll outline the nature of the problem and several contributions that have led to a new generation of cryo-EM algorithms that are reshaping structural biology.

Speaker Info: David J. Fleet

Professor, Dept. of Computer Science and Dept. of Computer and Mathematical Sciences, University of Toronto; Faculty Member, Vector Institute; Associate Research Director, Industry Innovation, Vector Institute; Senior Fellow, Canadian Institute for Advanced Research; Canada CIFAR Artificial Intelligence Chair

David Fleet is Professor of Computer Science at the University of Toronto and Faculty Member of the Vector Institute. He received the PhD in Computer Science from the University of Toronto in 1991. From 1991 to 2000 he was on faculty at Queen’s University, Canada, in the Department of Computing and Information Science, with cross-appointments in Psychology and Electrical Engineering. In 1999 he joined the Palo Alto Research Center (PARC) where he managed the Digital Video Analysis Group and the Perceptual Document Analysis Group. He returned to the University of Toronto in October 2003. He served as Chair of the Department of Computer and Mathematical Sciences, University of Toronto Scarborough from 2012 to 2017. In 1996 Dr. Fleet was awarded an Alfred P. Sloan Research Fellowship for his research on biological vision. His 1999 paper with Michael Black on probabilistic detection and tracking of motion boundaries received Honorable Mention for the Marr Prize at the IEEE International Conference on Computer Vision (ICCV). His 2001 paper with Allan Jepson and Thomas El-Maraghi on robust appearance models for visual tracking was awarded runner-up best paper at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). In 2003, his paper with Eric Saund, James Mahoney and Dan Larner won the best paper award at ACM UIST ‘03. With Francisco Estrada and Allan Jepson, he won the best paper award at the British Machine Vision Conference (BMVC) in 2009. In 2010, his work with Michael Black and Hedvig Sidenbladh on human pose tracking received the Koenderink Prize for fundamental contributions to computer vision that withstood the test of time. He has served as Area Chair for numerous major computer vision and machine learning conferences. 
He was Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence (2000-2004), Program Co-Chair for the IEEE Conference on Computer Vision and Pattern Recognition in 2003, Associate Editor-In-Chief for IEEE Transactions on Pattern Analysis and Machine Intelligence (2005-2008), Program Co-Chair of ECCV 2014, and Senior Fellow of the Canadian Institute of Advanced Research (2005-2019). He currently serves on the Advisory Board for IEEE PAMI. His research interests include computer vision, image processing, visual perception, and visual neuroscience. He has published research articles and one book on various topics including the estimation of optical flow and stereoscopic disparity, probabilistic methods in motion analysis, 2D visual tracking, 3D people tracking and hand tracking, modeling appearance in image sequences, physics-based models of human motion analysis, non-Fourier motion and stereo perception, and the neural basis of stereo vision.

CVIS 2018

Colour and Consumer Cameras: The Good, the Bad, and the Ugly

Abstract: Cameras are now used for many purposes beyond taking photographs. Example applications include remote medical diagnosis, crop monitoring, 3D reconstruction, document recognition, and many more. For such applications, it is desirable to have a camera act as a sensor that directly measures scene light. The problem, however, is that most commodity cameras apply a number of camera-specific processing steps to the captured image in order to produce visually pleasing photos. As a result, different cameras produce noticeably different colors when imaging the exact same scene. This is problematic for applications relying on color because algorithms developed using images from one camera often will not work with images captured on another camera due to color differences. In this talk, I’ll discuss the current state of affairs for color on commodity cameras, common incorrect assumptions made in the scientific literature regarding image color, and recent developments that are helping to improve the situation.
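To illustrate the point about camera-specific processing, here is a minimal sketch of two rendering pipelines applied to the same raw sensor measurement. The colour-correction matrices and gamma values are invented for illustration; real camera pipelines are proprietary and far more complex.

```python
# Two hypothetical camera pipelines applied to the same raw sensor triplet.

def render(raw, ccm, gamma):
    """Apply a 3x3 colour-correction matrix, clip to [0, 1], then a gamma curve."""
    lin = [sum(m * c for m, c in zip(row, raw)) for row in ccm]
    return [round(max(0.0, min(1.0, v)) ** (1.0 / gamma), 3) for v in lin]

raw = [0.40, 0.30, 0.20]  # same scene radiance seen by both sensors

camera_a = render(raw, ccm=[[1.6, -0.4, -0.2],
                            [-0.3, 1.5, -0.2],
                            [-0.1, -0.5, 1.6]], gamma=2.2)
camera_b = render(raw, ccm=[[1.9, -0.6, -0.3],
                            [-0.4, 1.7, -0.3],
                            [-0.2, -0.6, 1.8]], gamma=1.8)

print(camera_a != camera_b)  # True: the "same" colour renders differently
```

An algorithm tuned on `camera_a`'s output will see systematically shifted colours on `camera_b`, which is exactly the portability problem the talk addresses.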

Speaker Info: Michael S. Brown

Canada Research Chair in Computer Vision, York University

Michael S. Brown is a professor and Canada Research Chair in Computer Vision at York University in Toronto. Before joining York in 2016, he spent 14 years in Asia, working at the Hong Kong University of Science and Technology, Nanyang Technological University, and National University of Singapore. Dr. Brown’s research is focused on computer vision, image processing, and graphics. Dr. Brown routinely serves as an area chair for the major computer vision conferences (CVPR, ICCV, ECCV, and ACCV) and served as an associate editor for the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI) from 2011 to 2016. He is currently an associate editor for the International Journal of Computer Vision.

Objectives for Image Segmentation

Abstract: Segmentation of images is an essential part of many computer vision problems. It is important for biomedical imaging, 3D reconstruction, motion analysis, scene understanding, autonomous driving, etc. In particularly simple cases, the problem may be trivially solved by independent processing of pixels (e.g. thresholding), but more typically segmentation is an ill-posed problem requiring additional constraints to compensate for data ambiguities. Such regularization constraints may represent prior knowledge on shape, geometry, topology, structure, or physical properties of segmented objects. This talk discusses standard and some newer objectives (loss functions) for image segmentation, their optimization methods, limitations, and reviews their use in unsupervised and weakly-supervised settings, including semantic CNN segmentation.
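The "particularly simple" per-pixel case mentioned above can be sketched in a few lines. This is an illustrative toy, not one of the regularized objectives the talk covers: it works only when foreground and background intensities are well separated, which is precisely why real segmentation needs additional constraints.

```python
# Per-pixel thresholding: each pixel is labeled independently of its
# neighbours, with no spatial regularization or prior knowledge.

def threshold_segment(image, t):
    """Label each pixel independently: 1 if intensity > t, else 0."""
    return [[1 if px > t else 0 for px in row] for row in image]

image = [
    [10, 12, 200, 205],
    [11, 13, 198, 210],
    [ 9, 14, 195, 202],
]

mask = threshold_segment(image, t=100)
print(mask)  # [[0, 0, 1, 1], [0, 0, 1, 1], [0, 0, 1, 1]]
```

With noisy or ambiguous intensities this independent labeling falls apart, motivating the shape, geometry, and topology priors discussed in the talk.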

Speaker Info: Yuri Boykov

University of Waterloo

Yuri Boykov is a Professor at the Cheriton School of Computer Science at the University of Waterloo. He is also an adjunct Professor of Computer Science at Western University. His research is concentrated in the area of computer vision and biomedical image analysis, with a focus on modeling and optimization for structured segmentation, restoration, registration, stereo, motion, model fitting, recognition, photo-video editing, and other data analysis problems. He is an editor for the International Journal of Computer Vision (IJCV). His work was listed among the 10 most influential papers in IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI Top Picks for 30 years). In 2017, Google Scholar listed his work on segmentation (from 2006) as a “classic paper in computer vision and pattern recognition”. In 2011 he received the Helmholtz Prize from IEEE and the Test of Time Award from the International Conference on Computer Vision. The Faculty of Science at the University of Western Ontario recognized his work by awarding him a Distinguished Research Professorship in 2014 and the Florence Bucke Prize in 2008.

CVIS 2017

Deep Learning in the Enterprise: the Potential, the Pitfalls, and the Possibilities

Abstract: In the past couple of years, developments in Artificial Intelligence have captured the public imagination. Google and Facebook are designing systems that understand images well enough to describe them in words. Neural networks can translate language as well as trained experts, idioms and innuendos included. Even artistic pursuits, the supposed privilege of homo sapiens, are being encroached upon by the burgeoning field of Computational Creativity. But how do these powerful technologies manifest themselves in practice? Taking an industry-centric approach, we’ll examine such questions as:

  • What do these technologies portend for enterprise clients? For example, for financial services?
  • What are some of the concerns around their adoption?
  • How do such tools integrate with other enterprise technologies, such as Big Data?
  • What role do ethics play in developing such systems?

Speaker Info: Sheldon Fernandez

VP of Engineering, Infusion

Sheldon Fernandez is the VP of Engineering for Infusion, a digital transformation and consultancy firm that was acquired by Avanade in February of this year. Over the last 16 years, Sheldon helped grow the company from 6 people to over 650, with offices around the world. In the formative years, he served as the company’s chief architect and was responsible for honing the firm’s technical direction amidst ongoing industry changes. Additionally, Sheldon served as Chief Technical Officer for the noteworthy Infusion spin-off PersonIf (www.personif.com), a ‘people discovery’ platform that radically changed the way the entertainment industry locates talent. In addition to winning Microsoft’s cloud solution of the year award in 2012, the technology was used as the casting engine for The Glee Project, X-Factor, American Idol, and Sports Illustrated. Finally, Sheldon has coupled his entrepreneurial endeavours and engineering exploits with numerous non-technical pursuits. He completed a Master’s degree in theology at the University of Toronto in 2008, pursuing thesis work in the area of neuroscience and metaethics. He has anchored his humanitarian sensibilities with the African Jesuits Aids Network (AJAN) in Nairobi, where he assisted with infrastructure projects and HIV/AIDS education. He also pursued creative writing at Oxford University to hone his literary craft, and completed a professional training program in mass atrocity prevention at the Montreal Institute for Genocide Studies.

Artificial Intelligence in Medical Imaging

Abstract: Artificial intelligence in medical imaging is undergoing great change, with tremendous new opportunities emerging. The rise of machine learning (especially deep learning), big data analytics, and cloud computing has created wonderful opportunities to invent the next generation of AI techniques, not only to solve newly appearing problems, but also to address long-standing challenges in conventional medical image analysis with much more satisfactory real-time solutions. This talk will share our view of, and experience in, developing this new generation of state-of-the-art AI to help physicians and hospital administrators analyze rapidly growing medical data and make the right decisions early.

Speaker Info: Shuo Li

Director, Digital Imaging Group of London

Dr. Shuo Li is the director of the Digital Imaging Group (DIG) of London, an associate professor in the Department of Medical Imaging and Medical Biophysics at the University of Western Ontario, and a scientist at the Lawson Health Research Institute. Before this position, he was a research scientist and project manager at General Electric (GE) Healthcare for 9 years. He founded the DIG (http://digitalimaginggroup.ca/) in 2006; it is a highly dynamic, multidisciplinary group. He received his Ph.D. degree in computer science from Concordia University in 2006, where his Ph.D. thesis won the doctoral prize given to the most deserving graduating student in the Faculty of Engineering and Computer Science. He has published over 100 publications and is the recipient of several GE internal awards. He serves as a guest editor and associate editor for several prestigious journals in the field, serves as a program committee member for highly influential conferences, and is the editor of six books. His current interest is developing intelligent analytic tools to help physicians and hospital administrators handle big medical data, centered on medical images.

The Rise of Robot Doctors?

Abstract: The ubiquity and accessibility of computing power has given rise to its use in many data-rich applications and sectors. One such application is medicine, which has seen a surge of efforts to implement machine learning to automate the diagnostic process. It is true that some fields of medicine have benefited substantially from such implementations of machine learning. Advanced algorithms have been used to detect features that are not readily visible to the naked eye, differentiate between similar cases that lead to different patient outcomes, predict imminent catastrophic health events, and, in general, assimilate a large volume of seemingly unrelated data to extract meaningful information that can aid a physician in making a clinical diagnosis. In this talk, I will outline some of the innovative ways that artificial intelligence is being used in medicine to do some truly amazing things. While these approaches may often seem far superior to a trained human physician in terms of diagnostic performance, I will argue that we will not be getting our medical diagnoses from a robot wearing a lab coat any time soon!

Speaker Info: Farnoud Kazemzadeh

CEO, Elucid Labs

Dr. Farnoud Kazemzadeh was formally trained as an astrophysicist, with a B.Sc. degree from the University of Waterloo. He pursued an M.Sc. in Space Sciences at the International Space University in France. There, he found his passion for optics and photonics and returned to the University of Waterloo to complete his M.A.Sc. and Ph.D. degrees in optical engineering. He is an internationally recognized leading expert in optics and photonics, specializing in interferometry, spectroscopy, holography, and microscopy. As a scientist, he has placed his instruments on various telescopes, planetary probes, and the International Space Station. As a serial entrepreneur, he has successfully led the design and development of high-performance optical devices for several startups. He is an Adjunct Professor in the Department of Systems Design Engineering at the University of Waterloo and is the co-founder and CEO of Elucid Labs, a medical technology company that harnesses the power of artificial intelligence and computational imaging for various applications in medicine.

CVIS 2016

Review of interventional and point-of-care imaging

Abstract: As a serial entrepreneur and the president of Synaptive Medical, a company dedicated to developing technologies that aim to change the standard of care in neurosurgery, Cameron has solid knowledge of, and expertise in, the rules of the road for medtech entrepreneurship. In this lecture, Cameron shares insights on key strategic trends and changing dynamics in the medical devices industry, and discusses the importance of new imaging technology in this expanding field. The lecture describes technology trends in the field, specifically the expanding use of optical imaging and magnetic resonance imaging to gather quantitative information at specific points of care in the patient care cycle.

Speaker Info: Cameron Piron

President and Co-founder, Director

Synaptive is led by Cameron Piron, President and co-founder. Cameron Piron is an industry-recognized leader and innovator in the field of image-guided surgery. Prior to co-founding Synaptive Medical, Cameron was the President and co-founder of Sentinelle Medical, a company that developed and manufactured advanced MRI-based breast imaging technologies and grew to over 200 employees and $20M+ in revenue before it was acquired by Hologic, Inc. in 2010. Cameron studied systems design engineering at the University of Waterloo, followed by a graduate degree in medical biophysics at the University of Toronto. Cameron has received a number of prestigious awards, including R&D Magazine’s Innovator of the Year – the first Canadian ever to win – and the Premier’s Catalyst Award for Best Young Innovator, both in 2008. In 2009, he also received an Alumni Achievement Medal from the University of Waterloo for his innovative leadership of Sentinelle Medical in the research and manufacture of leading-edge MRI technologies that allow physicians to diagnose breast cancer and other medical conditions more quickly and accurately. He was also named one of Canada’s Top 40 Under 40™ in 2009, a list established by Caldwell Partners to celebrate the achievements of young Canadians in the private, public, and not-for-profit sectors. Most recently, Cameron was named to Fast Company magazine’s Most Creative People list in 2015 and to the Rodale 100 list for 2016, which recognizes contributors to the field of medicine and wellness.

Adaptive Single-View 3D Scene Analysis

Abstract: Computer vision systems are beginning to rival human vision for certain tasks. However, there are still many ways in which human and machine vision systems diverge. While most single-view computer vision algorithms are 2D in nature, humans find it almost impossible to see in 2D: an innate 3D sense seems to provide the perceptual scaffolding for our understanding of the world around us. A second divergence concerns the way we learn. Most computer vision learning is supervised, and the resulting systems, once trained, do not adapt. Human perception, on the other hand, appears to be extremely adaptive over many time scales. In this talk, James will describe recent research in his lab on single-view scene analysis that attempts to embrace these principles of 3D adaptive perception. Specifically, he will review how these principles have led to recent progress on the problems of online camera pose estimation, roadway analytics, and crowd estimation.

Speaker Info: James Elder

Professor, Department of Electrical Engineering and Computer Science, York University

James Elder received his PhD in Electrical Engineering from McGill University in 1996. He is a member of the Centre for Vision Research and a Professor in the Departments of Electrical Engineering and Computer Science and Psychology at York University. Dr. Elder’s research has won a number of awards and honours, including the Premier’s Research Excellence Award and the Young Investigator Award from the Canadian Image Processing and Pattern Recognition Society. Dr. Elder’s research goal is to develop novel and useful computer vision algorithms and machine vision systems through a better understanding of visual processing in biological systems.