Every week, Invector Labs publishes a newsletter that covers the most recent developments in AI research and technology. You can find this week's issue below, along with the link to sign up. Please do so; our team worked really hard on it:
From the Editor: The Biggest Natural Language Model in History
When it comes to natural language, bigger is simply better. Time and again, large models with lots of parameters have outperformed ensembles of models optimized for individual tasks. Last year, Google and OpenAI reached big milestones with the release of models like BERT and GPT-2, respectively. Last week, Microsoft shocked the artificial intelligence (AI) community by unveiling the Turing Natural Language Generation (T-NLG) model, which uses an astonishing 17 billion parameters, making it the largest natural language model in history.
Microsoft’s achievement represents a major milestone in the evolution of conversational applications. From question-answering to text generation, T-NLG can perform at human level. Microsoft had to develop new technologies in order to train T-NLG at that scale. Given its size, the cost of using something like T-NLG is still prohibitive for most organizations, but it shouldn’t be long before that problem gets solved. Without a doubt, massive language models like T-NLG remain one of the most exciting areas of the AI ecosystem.
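A back-of-envelope estimate helps explain why training at that scale forced Microsoft to develop new systems technology. The numbers below are illustrative assumptions, not figures from the announcement: fp16 weights at 2 bytes per parameter, and roughly 16 bytes per parameter for the full training state in a naive data-parallel setup (fp16 weights and gradients plus fp32 master weights and two Adam moment buffers).

```python
# Rough memory estimate for a 17-billion-parameter model.
# Assumptions (illustrative, not from the article):
#   - fp16 weights: 2 bytes per parameter
#   - full mixed-precision training state: ~16 bytes per parameter
PARAMS = 17e9

weights_gb = PARAMS * 2 / 1e9    # fp16 weights alone
training_gb = PARAMS * 16 / 1e9  # weights + gradients + optimizer state

print(f"fp16 weights: {weights_gb:.0f} GB")          # 34 GB
print(f"full training state: {training_gb:.0f} GB")  # 272 GB
```

Even the weights alone overflow any single GPU of the time, and the full training state is an order of magnitude larger still, which is why naive data parallelism (where every GPU replicates all of this state) could not work.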
Now let’s take a look at the core developments in AI research and technology this week:
The Biggest Language Model in AI History
As part of Project Turing, Microsoft Research released T-NLG, a language model that uses 17 billion parameters.
Uber Backtesting Service
Uber published an insightful blog post describing a new service they built to backtest models at scale.
Fixing Outdated Wikipedia Articles
AI researchers from MIT have developed a new text generation model that can fix outdated Wikipedia articles while maintaining the original language.
Cool AI Tech Releases
DeepSpeed and ZeRO
Microsoft open sourced the DeepSpeed framework and the ZeRO optimizer, the key technologies behind training the largest natural language model in history.
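The core idea behind ZeRO is to partition optimizer state across data-parallel workers instead of replicating it on every GPU. The toy sketch below illustrates only that partitioning idea; the function name and structure are ours, not DeepSpeed's API.

```python
# Toy illustration of ZeRO-style partitioning: rather than every worker
# holding optimizer state for all parameters, each worker owns the state
# for only its shard. Illustrative only; not DeepSpeed code.

def shard_params(num_params: int, num_workers: int) -> list:
    """Split parameter indices into contiguous shards, one per worker."""
    base, extra = divmod(num_params, num_workers)
    shards, start = [], 0
    for w in range(num_workers):
        size = base + (1 if w < extra else 0)  # spread the remainder
        shards.append(range(start, start + size))
        start += size
    return shards

shards = shard_params(num_params=10, num_workers=4)
print([len(s) for s in shards])  # [3, 3, 2, 2]
```

With N data-parallel workers, each worker's share of the optimizer state shrinks roughly by a factor of N, which is what makes multi-billion-parameter training fit in GPU memory.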
A Multilingual Dataset for Training QA AI Agents
Google released TyDi, a new dataset for training question-answering agents in multiple languages.
AutoFlip
Google open sourced AutoFlip, a framework for intelligent video reframing.
AI in the Real World
Sony is Betting Big on Reinforcement Learning
Sony is leveraging reinforcement learning to build better video game characters and even smarter cameras.
Deep Instinct Raises Big
Deep learning company Deep Instinct has raised $43 million to build deep learning models that can prevent sophisticated cyber attacks.
The RoboTHOR Challenge
The Allen Institute for AI announced a new challenge called RoboTHOR to crowdsource better navigation algorithms.