Last Week in AI
Every week, my team at Invector Labs publishes a newsletter to track the most recent developments in AI research and technology. You can find this week’s issue below, and you can sign up to receive future issues as well. Please do; the team works really hard on it:
From the Editor
Interpretability remains one of the biggest challenges in machine learning applications. The famous tradeoff between interpretability and accuracy tells us that highly effective AI models are notoriously hard to analyze and understand. Without understanding the behavior of AI models, how can we possibly audit and troubleshoot them effectively? The interpretability challenge is even greater in models that deal with highly unstructured data, such as image classifiers.
This week, AI researchers from Google and OpenAI published a groundbreaking method that attempts to improve the interpretability of deep neural networks for image classification. Called activation atlases, the new technique allows us to understand how image classifiers build their internal knowledge. In other words, we can “see what the neural network sees.” Even though activation atlases are still highly experimental, some of the concepts seem incredibly robust and promising. It is certainly an important step towards improving the interpretability of deep learning models.
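To make the idea more concrete, here is a minimal sketch of the general recipe behind an activation atlas: collect activation vectors from an intermediate layer of a pretrained image classifier across many images, then project them into two dimensions so that similar activations land near each other. This is not the authors’ code; the choice of a torchvision GoogLeNet (a stand-in for the InceptionV1 model used in the paper), the inception4d layer, the umap-learn projection, and the random placeholder batch are all assumptions made for illustration.

```python
# Sketch of the activation-atlas idea: gather per-position activation vectors
# from an intermediate layer, then reduce them to 2D. Assumes torch,
# torchvision, and umap-learn are installed.
import torch
import torchvision.models as models
import umap  # from the umap-learn package

model = models.googlenet(pretrained=True).eval()  # InceptionV1-style classifier

activations = []

def hook(module, inputs, output):
    # output has shape (batch, channels, H, W); treat every spatial position
    # as one activation vector, as the activation-atlas method does.
    b, c, h, w = output.shape
    vectors = output.permute(0, 2, 3, 1).reshape(-1, c)
    activations.append(vectors.detach())

# Hook one mid-level layer; the paper builds atlases for several layers.
model.inception4d.register_forward_hook(hook)

# Placeholder batch standing in for a large set of preprocessed natural images.
images = torch.randn(16, 3, 224, 224)
with torch.no_grad():
    model(images)

vectors = torch.cat(activations).numpy()

# Project the high-dimensional activation vectors to 2D. The full method then
# bins this layout into a grid and renders a feature visualization per cell,
# which is what produces the atlas images shown in the paper.
layout = umap.UMAP(n_components=2).fit_transform(vectors)
print(layout.shape)  # (number of activation vectors, 2)
```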
Now let’s take a look at the core developments in AI research and technology this week:
Research
OpenAI and Google collaborated on a new technique for interpretability in deep neural networks for image classification.
>Read more in this blog post from OpenAI
Microsoft AI researchers published a new model and public dataset for weather forecasting.
>Read more in this blog post from Microsoft Research
Google published a research paper proposing a new method based on recurrent neural networks for handwriting analysis.
>Read more in this blog post from Google AI Research
Cool Tech Releases
DeepMind open sourced TF-Replicator, a library that streamlines the deployment of distributed machine learning workflows.
>Read more in this blog post from DeepMind
Google introduced GPipe, an open source library for training machine learning models at scale.
>Read more in this blog post from the Google AI team
Uber engineers pioneered a machine learning technique called capacity safety that forecasts the capacity needs of their microservices infrastructure.
>Read more in this blog post from the Uber engineering team
AI in the Real World
A team of Canadian AI researchers trained an AI model to predict Alzheimer’s disease.
>Read more in this coverage from UnDark
The New York Times discusses a new machine learning model used to create personalized diets.
>Read more in this article from the New York Times
AI is revolutionizing how video games are created and played.