Moustapha Cissé Interview – Security and Safety in AI: Adversarial Examples, Bias and Trust

In this episode, I’m joined by Moustapha Cissé, Research Scientist at Facebook AI Research (FAIR) in Paris.

Moustapha’s broad research interests include the security and safety of AI systems, and we spend some time discussing his work on adversarial examples and on building systems that are robust to adversarial attacks. More broadly, we discuss the role of bias in datasets and explore his vision for models that can identify these biases and adjust the way they train in order to avoid taking them on.

Be sure to check out some of the great names that will be at the AI Conference in New York, Apr 29–May 2, where you’ll join leading minds in AI, including Peter Norvig, George Church, Olga Russakovsky, Manuela Veloso, and Zoubin Ghahramani. Explore AI’s latest developments, separate what’s hype from what’s really game-changing, and learn how to apply AI in your organization right now. Save 20% on most passes with discount code PCTWIML at twimlai.com/ainy2018. Early pricing ends February 2!

The notes for this show can be found at https://twimlai.com/talk/108.
For complete contest details, visit https://twimlai.com/myaicontest.
For complete series details, visit https://twimlai.com/blackinai2018.

Subscribe!
iTunes ➙ https://itunes.apple.com/us/podcast/this-week-in-machine-learning/id1116303051?mt=2
Soundcloud ➙ https://soundcloud.com/twiml
Google Play ➙ http://bit.ly/2lrWlJZ
Stitcher ➙ http://www.stitcher.com/s?fid=92079&refid=stpr
RSS ➙ https://twimlai.com/feed
Subscribe to our newsletter! ➙ https://twimlai.com/newsletter

Let’s Connect!
Twimlai.com ➙ https://twimlai.com/contact
Twitter ➙ https://twitter.com/twimlai
Facebook ➙ https://Facebook.com/Twimlai
Medium ➙ https://medium.com/this-week-in-machine-learning-ai
