Every week, my team at Invector Labs publishes a newsletter to track the most recent developments in AI research and technology. You can find this week’s issue below. Please sign up — our guys worked really hard on this:
From the Editor
What’s the best deep learning framework for the job? That’s a question that constantly torments data science teams working on real-world implementations. The sheer number of deep learning stacks on the market can overwhelm even the best technologists. Google is throwing a lot of weight behind TensorFlow; with Facebook’s backing, PyTorch is quickly becoming one of the most popular stacks in the market; Amazon has placed its bets on MxNet; and several other deep learning stacks, such as Caffe2 and Keras, have seen relevant traction as well.
At Invector Labs, in 2018 we saw more implementations of TensorFlow, MxNet, and PyTorch than of all other frameworks combined. If that’s at all representative of the broader market, it seems that the backing of Google, Amazon, and Facebook, respectively, is having an impact.
The truth is that no deep learning stack is universally better than the others. Frameworks that shine in production, like TensorFlow or Caffe2, are not as good for experimentation as PyTorch. Stacks like MxNet excel in the AWS cloud, the most widely adopted runtime for deep learning solutions, while TensorFlow performs best in runtimes like Apache Spark. Furthermore, most medium-to-large data science environments end up requiring more than one deep learning framework. The most important thing for data science teams is not to select the single best deep learning stack but to build an infrastructure in which diverse frameworks can be used effectively. Easier said than done though 😊.
Now let’s take a look at the core developments in AI research and technology this week:
Researchers from the Google Brain team published a paper introducing a method called Grasp2Vec to acquire object-centric representations for robotic manipulation tasks.
OpenAI published a research study that proposes a new statistical method to measure how the training of AI agents scales.
Researchers from IBM’s Zurich Lab published a paper proposing a technique to estimate the performance of a neural network prior to training.
Cool Tech Releases
Facebook open sourced PyText, a PyTorch-based framework for faster natural language processing development.
Microsoft Research’s Montreal Lab announced a competition for data science teams to solve text-based games using the newly announced TextWorld framework.
Facebook joined the MLPerf initiative and contributed MaskRCNN2GO, a computer vision technique optimized for mobile devices.
AI in the Real World
The prestigious Harvard Magazine published a comprehensive analysis of the relationship between ethics and AI.
Members of the U.S. intelligence community catalogued AI as an emerging threat to national security.
A recent study from New York University showed that AI agents can be used to fool biometric security systems.