
Last Week in AI

Jesus Rodriguez
3 min read · Aug 11, 2019


Every week, my team at Invector Labs publishes a newsletter to track the most recent developments in AI research and technology. You can find this week’s issue below, and you can sign up to receive future issues. Please do so; our team worked really hard on this:

From the Editor: The Friction Between Interpretability and Accuracy in Deep Learning

One of the biggest challenges of building deep learning models is understanding how they arrive at their conclusions. The emergence of deep learning increased the complexity of traditional machine learning models by a significant multiple. Today, it is common to encounter even fairly standard neural networks with millions of nodes and hundreds of hidden layers. Navigating those complex structures to interpret the decisions of a model is nearly impossible. Deep learning theory often refers to this phenomenon as the accuracy-interpretability friction.
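To make the idea concrete, here is a minimal sketch (not from the newsletter) of one common way researchers probe an otherwise opaque network: gradient-based saliency, which estimates how much each input feature influenced a prediction. The model and input below are placeholders chosen purely for illustration.

```python
# Hedged sketch: gradient saliency on a toy PyTorch network.
# The architecture and data are illustrative assumptions, not a real model.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
model.eval()

x = torch.randn(1, 20, requires_grad=True)  # one hypothetical input
score = model(x)[0, 1]                      # logit of the class we inspect
score.backward()                            # gradients of the score w.r.t. the input

saliency = x.grad.abs().squeeze()           # per-feature influence estimate
print(saliency.topk(5).indices)             # the five most influential features
```

Even this simple technique only hints at what the model is doing; for networks with millions of parameters, such attributions are approximations rather than true explanations.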

The idea behind the accuracy-interpretability friction is very simple: models that are easy to interpret tend not to perform well in sophisticated environments, while more powerful models are nearly impossible to interpret. As deep learning evolves, there has been an increasing need for tools that improve the interpretability and visualization of deep learning models. This week, IBM released a new toolkit with that sole purpose…
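The trade-off is easy to see on any standard classification task. The sketch below, a hedged illustration using scikit-learn (the dataset and hyperparameters are my own illustrative choices, not from the article), pits a shallow decision tree whose full decision logic can be printed and read against a larger multi-layer perceptron that typically scores higher but offers no comparable explanation.

```python
# Hedged sketch of the accuracy-interpretability friction:
# an interpretable shallow tree vs. a more accurate but opaque MLP.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(128, 128), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)

print("tree accuracy:", tree.score(X_te, y_te))
print("mlp accuracy:", mlp.score(X_te, y_te))
print(export_text(tree))  # the tree's complete decision logic, human-readable
```

The tree's `export_text` output is a full, auditable account of its behavior; nothing comparable exists for the MLP's weight matrices, which is exactly the gap interpretability toolkits try to close.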

Written by Jesus Rodriguez

CEO of IntoTheBlock, President of Faktory, President of NeuralFabric and founder of The Sequence, Lecturer at Columbia University, Wharton, Angel Investor...
