Wednesday, Sep 26
Building Safe Artificial Intelligence with OpenMined
Research Centre for Machine Learning
Speaker: Andrew Trask
Title: Building Safe Artificial Intelligence with OpenMined
Abstract:
In this talk, you will learn about some of the most important new techniques in secure, privacy-preserving, and multi-owner-governed Artificial Intelligence. The first section of the talk will present a sober, up-to-date view of the current state of AI safety, user privacy, and AI governance. Andrew will then introduce several fundamental tools of technical AI safety: Homomorphic Encryption, Secure Multi-Party Computation, Federated Learning, and Differential Privacy. The talk will finish with an exciting demo from the OpenMined open-source project showing how to train a deep neural network while both the training data AND the model remain in a safe, encrypted state during the entire process.
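To give a flavour of one of the techniques listed above, here is a minimal sketch of additive secret sharing, the building block behind Secure Multi-Party Computation. The function names, the number of parties, and the choice of modulus are illustrative assumptions, not the talk's actual demo code (which uses the OpenMined tooling):

```python
import random

# Illustrative field modulus; real protocols choose this carefully.
PRIME = 2**61 - 1

def share(secret, n=3):
    """Split an integer into n additive shares modulo PRIME.

    No single share reveals anything about the secret; only the
    sum of all n shares reconstructs it.
    """
    shares = [random.randrange(PRIME) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares by summing them modulo PRIME."""
    return sum(shares) % PRIME

a, b = 5, 7
a_shares = share(a)
b_shares = share(b)

# Each party adds its two shares locally; no party ever sees a or b,
# yet the reconstructed result is the true sum.
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # prints 12
```

Addition of encrypted values, as shown here, composes into the linear algebra at the heart of neural-network training, which is how a model can be trained while both data and weights stay in shared (encrypted) form.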
Bio:
Andrew Trask is a PhD student at the University of Oxford, where he researches new techniques for technical AI safety. With a passion for making complex ideas easy to learn, he is also the author of the book Grokking Deep Learning, an instructor in Udacity's Deep Learning Nanodegree, and the author of a popular deep learning blog, iamtrask.github.io. He is also the leader of the OpenMined open-source community, a group of over 3000 researchers, practitioners, and enthusiasts that extends major deep learning frameworks with open-source tools for technical AI safety (openmined.org).
Slides from this seminar can be found here.