Building Safe Artificial Intelligence with OpenMined

Wednesday 26 September 2018, 6.00pm

Event type: Seminars

Audience: Staff, Students, Academics

Research Centre: Research Centre for Machine Learning

Speaker: Andrew Trask

Title: Building Safe Artificial Intelligence with OpenMined

Abstract:

In this talk, you will learn about some of the most important new techniques in secure, privacy-preserving, and multi-owner-governed Artificial Intelligence. The first section of the talk will present a sober, up-to-date view of the current state of AI safety, user privacy, and AI governance. Andrew will then introduce several fundamental tools of technical AI safety: Homomorphic Encryption, Secure Multi-Party Computation, Federated Learning, and Differential Privacy. The talk will finish with an exciting demo from the OpenMined open-source project showing how to train a deep neural network while both the training data and the model remain in a safe, encrypted state throughout the process.
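To give a flavour of the Secure Multi-Party Computation techniques mentioned above, the following is a minimal illustrative sketch of additive secret sharing in plain Python. It is not OpenMined's implementation; the modulus, function names, and three-party setup are arbitrary choices for the example.

    import random

    Q = 2**31 - 1  # large modulus (arbitrary choice for this sketch)

    def share(secret, n_parties=3):
        # Split an integer into n additive shares modulo Q.
        shares = [random.randrange(Q) for _ in range(n_parties - 1)]
        shares.append((secret - sum(shares)) % Q)
        return shares

    def reconstruct(shares):
        # Only the sum of all shares reveals the secret;
        # any single share on its own looks like random noise.
        return sum(shares) % Q

    def add(shares_a, shares_b):
        # Each party adds its own shares locally, so the sum is
        # computed without anyone seeing the underlying values.
        return [(a + b) % Q for a, b in zip(shares_a, shares_b)]

    x_shares = share(25)
    y_shares = share(17)
    print(reconstruct(add(x_shares, y_shares)))  # prints 42

The same idea, extended to multiplication and applied to model weights and training data, is what allows a neural network to be trained while everything stays in a shared, encrypted state.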

Bio:

Andrew Trask is a PhD student at the University of Oxford, where he researches new techniques for technical AI safety. With a passion for making complex ideas easy to learn, he is the author of the book Grokking Deep Learning, an instructor in Udacity's Deep Learning Nanodegree, and the author of the popular Deep Learning blog iamtrask.github.io. He is also the leader of the OpenMined open-source community, a group of over 3,000 researchers, practitioners, and enthusiasts that extends major Deep Learning frameworks with open-source tools for technical AI safety (openmined.org).

Slides from this seminar can be found here.


When & where

6.00pm - 7.00pm, Wednesday 26th September 2018

AG21, College Building, City, University of London, St John Street, London EC1V 4PB, United Kingdom