In this episode, I’m joined by David Rosenberg, data scientist in the office of the CTO at financial publisher Bloomberg, to discuss his work on “Extracting Data from Tables and Charts in Natural Document Formats.”
Bloomberg deals with tons of financial and company data in PDFs and other unstructured document formats on a daily basis. To extract meaning from this information more efficiently, David and his team have implemented a deep learning pipeline for pulling data out of these documents. In our conversation, we dig into the information extraction process, including how it was built, how they sourced their training data, why they used LaTeX as an intermediate representation, and how and why they optimize for pixel-perfect accuracy. There’s a lot of interesting info in this show and I think you’re going to enjoy it.
The notes for this show can be found at twimlai.com/talk/126.
iTunes ➙ https://itunes.apple.com/us/podcast/this-week-in-machine-learning/id1116303051?mt=2
Soundcloud ➙ https://soundcloud.com/twiml
Google Play ➙ http://bit.ly/2lrWlJZ
Stitcher ➙ http://www.stitcher.com/s?fid=92079&refid=stpr
RSS ➙ https://twimlai.com/feed
Subscribe to our newsletter! ➙ https://twimlai.com/newsletter
Twimlai.com ➙ https://twimlai.com/contact
Twitter ➙ https://twitter.com/twimlai
Facebook ➙ https://Facebook.com/Twimlai
Medium ➙ https://medium.com/this-week-in-machine-learning-ai