Hello TUX!
A reminder that we have a Sanders Series Lecture by Saleema Amershi of Microsoft today.
We look forward to seeing you there!
Ali, Fraser, Daniel and Tovi
Sanders Series Invited Lecture – Saleema Amershi:
Toward Responsible AI by Planning to Fail
February 11, 2020. MaRS Auditorium @ 661 University Avenue, Toronto, ON.
Lunch reception begins at 12:00 pm. Presentation begins at 1:00 pm.
Abstract
The potential for AI technologies to enhance human capabilities and improve our lives is hardly in question; neither, however, is their potential to cause harm and social disruption. While preventing or minimizing AI biases and harms is justifiably the subject of intense study in academic, industrial, and even legal communities, an approach centered on acknowledging and planning for AI-based failures can shed new light on how to develop and deploy responsible AI-based systems.
In this talk, I will discuss the sociotechnical nature of several inherent and unavoidable AI failures and why it is important for the industry to systematically and proactively identify, assess, and mitigate harms caused by such failures in our AI-based products and services. I will then present Microsoft’s recently released Guidelines for Human-AI Interaction and how we’ve been using them at Microsoft to help teams think through and prepare for different types of AI failures.
Bio
Saleema Amershi<http://research.microsoft.com/~samershi> is a Principal Researcher at Microsoft Research AI and currently chairs Microsoft’s Aether Working Group on Human-AI Interaction and Collaboration. Aether is Microsoft’s advisory committee on responsible and ethical AI. Saleema’s research focuses on helping people create effective and responsible AI user experiences. Her recent work includes leading Microsoft’s effort to develop general Guidelines for Human-AI Interaction<https://aka.ms/aiguidelines>, a unified and validated set of guidelines that establishes a foundation for human-AI interaction design. Over the years, she has developed tools and methodologies to support practitioners in designing and building AI-based products and services, including general-purpose platforms and visualizations for data scientists building predictive models, and application-specific techniques for supporting end users interacting with AI systems in their everyday lives.
Saleema holds a PhD in Computer Science & Engineering from the Paul G. Allen School at the University of Washington. Prior to UW, she completed an MSc in Computer Science and a BSc in Computer Science & Mathematics at the University of British Columbia.
OUR SPONSORS:
TUX is made possible by the support of our sponsors, Steven Sanders, Autodesk,
University of Toronto Department of Computer Science, and Chatham Labs.
A reminder that next week Saleema Amershi is the featured speaker at the TUX Talk in the MaRS Auditorium (lower level).
The talk commences at 1:00 p.m. and lunch is available from 12:00 noon onwards.
Please contact me if you have any questions.
Saleema Amershi:
Toward Responsible AI by Planning to Fail
February 11, 2020 at 661 University Avenue, Toronto, ON M5G 1M1<http://www.tux-hci.org/>
With thanks,
Anita