Last Week in AI
Every week, Invector Labs publishes a newsletter covering the most recent developments in AI research and technology. You can find this week’s issue below, along with a link to sign up. Please do; our team works hard on it.
From the Editor: Visualizing Neural Networks
Interpretability remains one of the biggest challenges in modern machine learning. Disciplines such as deep learning have increased the sophistication of neural networks, but that sophistication has made it harder to understand how these systems make decisions. The accuracy-interpretability dilemma sits at the center of the evolution of deep learning: it describes the friction between accomplishing complex knowledge tasks and understanding how those tasks were accomplished. In essence, interpretable models tend not to be very accurate, and accurate models tend to be hard to understand.
In addition to the interpretability-accuracy friction, understanding deep learning models requires a new generation of debugging tools. As a result of these challenges, data scientists typically rely on visualization tools to understand the decision-making process of deep learning models. This week we saw a major release in this area when OpenAI open-sourced Microscope and the Lucid library, two efforts focused on creating visual representations of…
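To make the idea concrete: feature-visualization tools of this kind generally work by activation maximization, i.e. synthesizing an input that maximally excites a chosen unit in the network. The following is a minimal NumPy sketch of that core loop using a hypothetical single linear "neuron"; the names here are illustrative and are not Lucid's actual API.

```python
import numpy as np

def activation_maximization(weights, steps=200, lr=0.1):
    """Gradient-ascent sketch: find an input x that maximally
    activates a single linear neuron a(x) = w . x, with x
    constrained to the unit sphere so the problem is bounded."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=weights.shape)
    x /= np.linalg.norm(x)
    for _ in range(steps):
        # The gradient of a(x) = w . x with respect to x is simply w.
        x += lr * weights
        x /= np.linalg.norm(x)  # project back onto the unit sphere
    return x

# Toy "neuron" weights; the optimum on the unit sphere is w / ||w||.
w = np.array([3.0, -1.0, 2.0])
x_star = activation_maximization(w)
print(np.allclose(x_star, w / np.linalg.norm(w), atol=1e-3))
```

Real tools apply the same loop to a deep network's internal units, plus regularizers and image parameterizations that keep the synthesized inputs human-interpretable.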