Kristen Grauman Interview - Embodied Visual Learning


This week on the podcast we’re featuring a series of conversations from the AWS re:Invent conference in Las Vegas. I had a great time at this event getting caught up on the latest machine learning and AI products and services announced by AWS and its partners. This time around we’re joined by Kristen Grauman, a professor in the Department of Computer Science at UT Austin.

Kristen specializes in computer vision and joined me ahead of her Deep Learning Summit talk, “Learning where to look in video”. We dig into the details of her talk, exploring how a vision system can learn how to move and where to look. Kristen considers how an embodied vision system can internalize the link between “how I move” and “what I see”, explores policies for learning to look around actively, and describes how a system can learn to mimic human videographer tendencies, automatically deciding where to look in unedited 360-degree video.

The notes for this show can be found at twimlai.com/talk/85.
For series details, visit twimlai.com/reinvent.

Copyright © 2018 NEURALSCULPT