Every week, my team at Invector Labs publishes a newsletter to track the most recent developments in AI research and technology. You can find this week's issue below. If you find it useful, please consider signing up; the team works hard on it every week.

From the Editor: Avoiding Bias in Machine Intelligence Systems

Bias is one of the most common challenges in machine learning applications. In a market dominated by supervised learning models, bias in the training datasets is likely to manifest in the behavior of the resulting machine learning algorithms. This week, Andreessen Horowitz partner Benedict Evans published a very thoughtful analysis of how to mitigate bias in machine learning systems, in which he identified three key elements:

  1. Methodological rigor in the collection and management of the training data.
  2. Technical tools to analyze and diagnose the behavior of the model.
  3. Training, education, and caution in the deployment of machine learning in products.

In my own experience, I find it useful to think about bias in conjunction with another property of an estimator, often referred to as variance. Conceptually, the variance of an estimator quantifies how much its value will vary when the dataset is resampled. While bias is a learner's tendency to consistently learn the same wrong thing, variance is the tendency to learn random things irrespective of the real signal. Why is this important? Well, the optimization of a machine learning model can be seen as the art of reducing bias without increasing variance.
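To make the bias/variance distinction concrete, here is a minimal NumPy sketch (not from the original article; the function names and parameters are illustrative). It repeatedly resamples a noisy dataset, fits polynomial models of different complexity, and estimates each model's squared bias (how far its average prediction is from the true signal) and variance (how much its predictions fluctuate across resampled datasets):

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # The real signal we are trying to learn.
    return np.sin(2 * np.pi * x)

x_test = np.linspace(0.0, 1.0, 50)

def bias_variance(degree, n_datasets=200, n_samples=30, noise=0.3):
    """Estimate squared bias and variance of a polynomial fit via resampling."""
    preds = np.empty((n_datasets, x_test.size))
    for i in range(n_datasets):
        # Draw a fresh noisy training set each time.
        x = rng.uniform(0.0, 1.0, n_samples)
        y = true_fn(x) + rng.normal(0.0, noise, n_samples)
        coefs = np.polyfit(x, y, degree)
        preds[i] = np.polyval(coefs, x_test)
    avg_pred = preds.mean(axis=0)
    # Bias^2: distance of the average prediction from the true signal.
    bias_sq = np.mean((avg_pred - true_fn(x_test)) ** 2)
    # Variance: average spread of predictions across resampled datasets.
    variance = np.mean(preds.var(axis=0))
    return bias_sq, variance

for degree in (1, 3, 9):
    b, v = bias_variance(degree)
    print(f"degree {degree}: bias^2={b:.3f}  variance={v:.3f}")
```

Running this, a simple degree-1 model shows high bias (it consistently learns the same wrong, overly straight line) while a high-degree model shows low bias but high variance (it chases the noise in each resampled dataset), which is exactly the trade-off optimization has to navigate.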

Now let’s take a look at the core developments in AI research and technology this week:

AI Research

Andreessen Horowitz partner and rock-star analyst Benedict Evans has a brilliant analysis of bias in AI systems.

>Read the entire blog post here

Facebook AI Research (FAIR) published a new paper proposing a vision analysis technique to understand both objects and background concepts in images.

>Read more in this blog post from the FAIR team

Google AI researchers published a paper describing a method for producing smaller and faster neural networks.

>Read more in this blog post from Google AI Research

Microsoft Research and Duke University published a new method for training variational auto-encoders which are widely used in deep learning models.

>Read more in this blog post from Microsoft Research

Cool Tech Releases

OpenAI unveiled Arena, an open system for training and playing Dota 2 reinforcement learning agents.

>Read more in this blog post from OpenAI

Uber contributed Hudi, a platform for large-scale analytics workloads, to the Apache Foundation.

>Read more in this blog post from the Uber engineering team

AI in the Real World

The Massachusetts Institute of Technology unveiled a neural network that can generate summaries of complex science papers.

>Read more in this coverage from MIT News

Google expands its AI efforts to Africa with the opening of a new AI center in Ghana.

>Read more in this coverage from CNN

Game builders are using AI to upscale the graphics of classic games and make them look more modern.

>Read more in this article from The Verge

CEO of IntoTheBlock, Chief Scientist at Invector Labs, I write The Sequence Newsletter, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
