
Every week, Invector Labs publishes a newsletter that covers the most recent developments in AI research and technology. You can find this week’s issue below, along with a link to sign up. Please do; our team works really hard on it:


From the Editor: The Biggest Natural Language Model in History

When it comes to natural language, bigger is simply better. Time and again, large models with lots of parameters have outperformed ensembles of models optimized for individual tasks. Last year, Google and OpenAI reached big milestones with the release of models like BERT and GPT-2, respectively. Last week, Microsoft shocked the artificial intelligence (AI) community by unveiling the Turing Natural Language Generation (T-NLG) model, which uses an astonishing 17 billion parameters, making it the largest natural language model in history.

Microsoft’s achievement represents a major milestone in the evolution of conversational applications. From question answering to text generation, the T-NLG model can perform at human level. Microsoft had to develop new technologies in order to train T-NLG at that scale. Given its size, the cost of using something like T-NLG remains prohibitive for most organizations, but it shouldn’t be long before that problem gets solved. Without a doubt, large-scale natural language generation continues to be one of the most exciting areas of the AI ecosystem.
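T-NLG itself is not publicly available, but the workflow it enables is easy to picture. Below is a minimal sketch of text generation with a large pretrained language model, using the much smaller GPT-2 (via Hugging Face’s transformers library) as a stand-in; the prompt and decoding settings are illustrative only:

```python
# Minimal sketch: text generation with a pretrained language model.
# GPT-2 stands in here, since T-NLG has not been released publicly.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The largest language model in history can"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation token by token; larger models tend to produce
# more coherent completions under the same decoding settings.
outputs = model.generate(
    **inputs,
    max_length=50,
    do_sample=True,
    top_k=50,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```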

Now let’s take a look at the core developments in AI research and technology this week:

AI Research

The Biggest Language Model in AI History

As part of Project Turing, Microsoft Research released T-NLG, a language model that uses 17 billion parameters.

>Read more in this blog post from Microsoft Research

Uber Backtesting Service

Uber published an insightful blog post describing a new service they built to backtest models at scale.

>Read more in this blog post from the Uber engineering team

Fixing Wikipedia

AI researchers from MIT have developed a new text generation model that can fix outdated Wikipedia articles while maintaining the original language.

>Read more in this article in MIT News

Cool AI Tech Releases

DeepSpeed and ZeRO

Microsoft open sourced the DeepSpeed framework and the ZeRO optimizer, which were the key technologies for training the largest natural language model in history.

>Read more in this blog post from Microsoft Research
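The Microsoft post is light on code, but the DeepSpeed API is compact enough to sketch. The snippet below is a minimal, single-GPU illustration (normally launched with the deepspeed command-line launcher); the toy linear model and config values are placeholders, exact initialize arguments vary a bit across DeepSpeed versions, and ZeRO is switched on through the zero_optimization section of the JSON config:

```python
# Minimal sketch of training with DeepSpeed + ZeRO (placeholder model/config).
import torch
import deepspeed

# Toy model standing in for a large transformer.
model = torch.nn.Linear(1024, 1024)

ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},          # ZeRO was introduced alongside fp16 training
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {"stage": 1},  # stage 1 partitions optimizer states
}

# deepspeed.initialize wraps the model and optimizer for distributed training.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

for step in range(10):
    x = torch.randn(8, 1024, dtype=torch.half, device=model_engine.device)
    loss = model_engine(x).pow(2).mean()
    model_engine.backward(loss)  # handles loss scaling and gradient reduction
    model_engine.step()
```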

A Multilingual Dataset for Training QA AI Agents

Google released TyDi QA, a new dataset for training question-answering agents in multiple typologically diverse languages.

>Read more in this blog post from Google Research

AutoFlip

Google open sourced AutoFlip, a framework for intelligent video reframing.

>Read more in this blog post from Google Research

AI in the Real World

Sony is Betting Big on Reinforcement Learning

Sony is leveraging reinforcement learning to build better video game characters and even smarter cameras.

>Read more in this coverage by Wired

Deep Instinct Raises Big

Deep learning company Deep Instinct has raised $43 million to build deep learning models that can prevent sophisticated cyber attacks.

>Read more in this coverage from VentureBeat

RoboTHOR

The Allen Institute for AI announced a new challenge called RoboTHOR to crowdsource better navigation algorithms.

>Read more in this coverage from MIT Technology Review

Written by

CEO of IntoTheBlock, Chief Scientist at Invector Labs, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
