Every week, my team at Invector Labs publishes a newsletter to track the most recent developments in AI research and technology. You can find this week’s issue below, and you can sign up for it there as well. Please do; our team works really hard on it:

From the Editor

What’s the best deep learning framework for the job? That’s a question that constantly torments data science teams working on real-world implementations. The large number of deep learning stacks in the market can be overwhelming even for the best technologists. Google is throwing a lot of weight behind TensorFlow; with Facebook’s backing, PyTorch is quickly becoming one of the most popular stacks in the market; Amazon has placed its bets behind MxNet; and several other deep learning stacks, such as Caffe2 and Keras, have seen relevant traction as well.

At Invector Labs, in 2018 we saw more implementations of TensorFlow, MxNet and PyTorch than of all the other frameworks combined. If that is at all representative of the market, it seems that the backing of Google, Amazon and Facebook respectively is having an impact.

The truth is that no deep learning stack is universally better than the others. Frameworks that are great in production, like TensorFlow or Caffe2, are not as good for experimentation as PyTorch. Stacks like MxNet excel in the AWS cloud, the most widely adopted runtime for deep learning solutions, while TensorFlow performs best in runtimes like Apache Spark. Furthermore, most medium-to-large data science environments end up requiring more than one deep learning framework. I guess the most important thing for data science teams is not to select the best deep learning stack but to create an infrastructure in which diverse frameworks can be used effectively. Easier said than done though 😊.

Now let’s take a look at the core developments in AI research and technology this week:


Researchers from the Google Brain team published a paper introducing a method called Grasp2Vec to acquire object-centric representations for robotic manipulation tasks.

>Read more in this blog post from the Google AI team

OpenAI published a research study that proposes a new statistical method to measure how the training of AI agents scales.

>Read more in this blog post from OpenAI

Researchers from IBM’s Zurich Lab published a paper proposing a technique to estimate the performance of a neural network prior to training.

>Read more in this blog post from IBM Research

Cool Tech Releases

Facebook open sourced PyText, a PyTorch-based framework for faster natural language processing development.

>Read more in this blog post from the Facebook engineering team

Microsoft Research’s Montreal Lab announced a competition challenging data science teams to solve text-based games using the newly announced TextWorld framework.

>Read more in this blog post from Microsoft Research

Facebook joined the MLPerf initiative and contributed Mask R-CNN2Go, a computer vision model optimized for mobile devices.

>Read more in this blog post from the Facebook engineering team

AI in the Real World

The prestigious Harvard Magazine published a comprehensive analysis of the relationship between ethics and AI.

>Read the entire article at Harvard Magazine online

Members of the U.S. intelligence community catalogued AI as an emerging threat to national security.

>Read more about it in this coverage from TechCrunch

A recent study from New York University showed that AI agents can be used to fool biometric security systems.

>Read more in this coverage from Fortune Magazine

CEO of IntoTheBlock, Chief Scientist at Invector Labs, I write The Sequence Newsletter, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.