Every week, Invector Labs publishes a newsletter that covers the most recent developments in AI research and technology. You can find this week’s issue below, and you can sign up to receive it every week. Please do so, our team works really hard on it:

From the Editor: Smaller, Faster Machine Learning

The catchy phrase “bigger is better” definitely applies when it comes to machine learning these days. Larger models trained on larger datasets have consistently outperformed groups of smaller models specialized in specific tasks. The examples are everywhere: Google’s BERT, OpenAI’s GPT-2, and Microsoft’s Turing-NLG operate at scales with computational costs that are prohibitive for most organizations. As a result, we are starting to see efforts to create smaller and more efficient machine learning models.

The idea of optimizing the size of a machine learning model without sacrificing its performance is conceptually trivial but really hard to implement in practice. Most large-scale machine learning models grow to fairly large sizes during training, making it really hard to understand which sections can be removed without affecting performance. This week, MIT researchers published a new method for pruning the size of machine learning models, and we are likely to continue seeing more research in this area.
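To make the idea of pruning concrete, here is a minimal sketch of magnitude-based weight pruning in PyTorch. This is a generic illustration of the concept, not the MIT method from the paper; the small network, layer sizes, and sparsity level are hypothetical and chosen only for readability.

```python
import torch
import torch.nn as nn

# A small hypothetical network used only to illustrate the idea.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

def magnitude_prune(module: nn.Linear, sparsity: float) -> None:
    """Zero out the fraction of weights with the smallest absolute values."""
    with torch.no_grad():
        magnitudes = module.weight.abs().flatten()
        k = int(sparsity * magnitudes.numel())
        if k == 0:
            return
        threshold = magnitudes.kthvalue(k).values  # k-th smallest magnitude
        mask = module.weight.abs() > threshold
        # Keep only the weights above the threshold; the rest become zero.
        module.weight.mul_(mask.to(module.weight.dtype))

# Prune 50% of the weights in every linear layer. In practice the pruned
# network is then fine-tuned (or retrained) to recover accuracy.
for layer in model:
    if isinstance(layer, nn.Linear):
        magnitude_prune(layer, sparsity=0.5)
```

The hard part, as the MIT work highlights, is not zeroing weights but deciding which ones can go without hurting accuracy, and doing so early enough in training to actually save compute.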

Now let’s take a look at the core developments in AI research and technology this week:

AI Research

Shrinking Deep Neural Networks

Researchers from MIT published a paper that proposed a method for shrinking deep neural networks.

>Read more in this coverage from MIT News

Jukebox

Researchers from OpenAI unveiled Jukebox, a deep neural network that can generate music and lyrics.

>Read more in this blog post from OpenAI

Better Loss Functions

Google Research published two papers discussing a method to create a single loss function that can optimize for different tasks.

>Read more in this blog post from Google Research

Cool AI Tech Releases

A New TensorFlow Runtime

The TensorFlow team open sourced TFRT, a new runtime that provides consistent infrastructure to maximize performance across different hardware topologies.

>Read more in this blog post from the TensorFlow team

Blender

Facebook open sourced Blender, an open-domain chatbot that outperforms others in terms of engagement and human-like communication capabilities.

>Read more in this blog post from Facebook Research

Tecton.ai

The team behind Uber’s Michelangelo has launched a new startup focused on operationalizing machine learning models, and it just raised a $20 million Series A.

>Read more in this coverage from TechCrunch

Querying Tables Using Natural Languages

Google open sourced a BERT-based model that processes natural language queries against tabular datasets.

>Read more in this blog post from Google Research

AI in the Real World

A Fascinating AI Experiment in the Defense Industry

The US Defense Intelligence Agency showcased an AI model that showed more risk tolerance than humans in the absence of critical data.

>Read more in this coverage from Defense One

AI for Autocompleting Code

AI startup Codota raised $12 million for its AI-powered code autocompletion technology.

>Read more in this coverage from TechCrunch

Protecting AI from Adversarial Attacks

Resistant.AI, a startup developing technology to protect AI models from adversarial attacks, just announced a new $2.75 million round.

>Read more in this coverage from VentureBeat

Written by

CEO of IntoTheBlock, Chief Scientist at Invector Labs, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
