Every week, my team at Invector Labs publishes a newsletter to track the most recent developments in AI research and technology. You can find this week’s issue below and sign up to receive future issues in your inbox. Please do; our team works really hard on it:

From the Editor: The Friction Between Interpretability and Accuracy in Deep Learning

One of the biggest challenges of building deep learning models is understanding how they arrive at their conclusions. The emergence of deep learning increased the complexity of traditional machine learning models by a significant multiple. Today, even fairly simple neural networks can contain millions of nodes and hundreds of hidden layers. Navigating those complex structures to interpret the decisions of a model is nearly impossible. Deep learning theory often refers to this phenomenon as the accuracy-interpretability friction.

The idea of the accuracy-interpretability friction is very simple: models that are easy to interpret tend not to perform well in sophisticated environments, while more robust models are nearly impossible to interpret. As deep learning evolves, there has been an increasing need for tools that improve the interpretability and visualization of deep learning models. This week, IBM released a new toolkit devoted to that purpose. Certainly, interpretability will be at the center of the next decade of deep learning innovation.
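To make the friction concrete, here is a minimal sketch (plain scikit-learn, not any of the toolkits covered in this issue; the dataset and hyperparameters are illustrative assumptions): a shallow decision tree can be printed as a handful of if-then rules, while even a small neural network trained on the same data buries its decision logic in thousands of learned weights.

```python
# Illustrative sketch of the accuracy-interpretability friction (scikit-learn).
# Not based on any specific toolkit mentioned in this newsletter.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# Interpretable model: a depth-3 tree whose decisions are human-readable rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)
print(export_text(tree, feature_names=list(data.feature_names)))
print("tree accuracy:", tree.score(X_test, y_test))

# Opaque model: even a small neural network holds thousands of weights
# that offer no direct explanation of an individual prediction.
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                    random_state=0).fit(X_train_s, y_train)
print("MLP accuracy:", mlp.score(X_test_s, y_test),
      "| learned weights:", sum(w.size for w in mlp.coefs_))
```

The tree’s printout can be handed to a domain expert as a set of rules; the network’s weight matrices cannot, which is exactly the gap that interpretability toolkits like the one IBM released this week try to close.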

Now let’s take a look at the core developments in AI research and technology this week:

AI Research

MIT unveiled a new machine learning model that automates the annotation of massive datasets for medical research.

>Read more in this article from MIT News

Google published a paper outlining a technique known as temporal cycle-consistency learning, which can simplify the training of video analysis models.

>Read more in this blog post from the Google Research team

DeepMind shared some of its recent work on deep learning models and datasets for ecological research.

>Read more in this blog post from the DeepMind team

Cool AI Tech Releases

IBM released AI Explainability 360, a new toolkit to improve the interpretability of machine learning models.

>Read more in this blog post from IBM Research

Microsoft and Carnegie Mellon University announced the MineRL competition, which leverages the Minecraft-based Project Malmo platform to advance reinforcement learning solutions.

>Read more in this blog post from Microsoft Research

Google released EfficientNet-EdgeTPU, a family of image classification models based on AutoML and optimized for Google’s Edge TPU.

>Read more in this blog post from the Google Research team

AI in the Real World

The U.K.’s National Health Service (NHS) is building a new unit to tackle AI challenges in health care.

>Read more about it in this coverage from VentureBeat

The US intelligence community revealed a project called Sentient that has been described as an artificial brain.

>Read more about it in this coverage from The Verge

MIT published an interesting analysis of China’s brain drain when it comes to AI talent.

>Read more about it in this article from MIT Technology Review

Written by

CEO of IntoTheBlock, Chief Scientist at Invector Labs, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
