Google and a New Machine Learning Theory of Everything

A new academic paper published by Google has been causing a lot of buzz in the machine learning community. Titled “One Model to Learn Them All”, the paper outlines a single machine learning model that can perform several different tasks efficiently.

The MultiModel algorithm, as the Google researchers call it, can be trained on different deep learning tasks such as object detection, speech translation, language parsing, image recognition and several others. While Google doesn’t portray MultiModel as a master algorithm that can learn everything at once, its creation represents a significant milestone for machine learning applications.

Multi-medium learners have long been seen as the holy grail of machine learning. While artificial intelligence (AI) and machine learning technologies have certainly come a long way, most of today’s applications are constrained to models that can only perform a single task. And even those are ridiculously hard to train and optimize! The single-task nature of most AI learning paradigms contrasts sharply with the multi-medium character of human intelligence. Over the centuries, humans have created knowledge about different domains using heterogeneous mediums such as text, video, pictures, audio and many others. Creating AI agents that can efficiently extract knowledge from different mediums and combine diverse learning paradigms (supervised, unsupervised…) takes us a step closer to simulating human intelligence.

Going back to Google’s paper, the MultiModel algorithm didn’t show radical improvements over existing approaches, but it did highlight a couple of interesting benefits of training machine learning models on several tasks. For instance, Google demonstrated that MultiModel improved its accuracy on tasks such as machine translation, speech and parsing when trained on all tasks simultaneously, compared to models trained on a single task. Additionally, the MultiModel approach seems to require less training data than traditional algorithms in order to achieve similar levels of accuracy.
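To make the idea a bit more concrete, here is a minimal sketch of joint multi-task training with a shared trunk and task-specific heads, written in PyTorch. This is not Google’s actual MultiModel architecture (which, per the paper, combines modality-specific sub-networks with a shared encoder and decoder); the model shape, task names and synthetic data below are invented purely for illustration.

```python
# A minimal multi-task training sketch: one shared trunk, several task heads,
# all trained at the same time. Everything here (dimensions, tasks, data) is
# hypothetical and only meant to illustrate the idea of joint training.
import torch
import torch.nn as nn

class SharedMultiTaskModel(nn.Module):
    def __init__(self, input_dim=32, hidden_dim=64, n_classes_a=10, n_classes_b=5):
        super().__init__()
        # Shared representation used by every task.
        self.trunk = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
        )
        # Task-specific output heads.
        self.head_a = nn.Linear(hidden_dim, n_classes_a)  # e.g. a classification task
        self.head_b = nn.Linear(hidden_dim, n_classes_b)  # e.g. a tagging task

    def forward(self, x, task):
        h = self.trunk(x)
        return self.head_a(h) if task == "a" else self.head_b(h)

model = SharedMultiTaskModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Train on synthetic batches from both tasks in every step.
for step in range(100):
    optimizer.zero_grad()
    x_a, y_a = torch.randn(16, 32), torch.randint(0, 10, (16,))
    x_b, y_b = torch.randn(16, 32), torch.randint(0, 5, (16,))
    loss = loss_fn(model(x_a, "a"), y_a) + loss_fn(model(x_b, "b"), y_b)
    loss.backward()
    optimizer.step()
```

The key point is that the shared trunk receives gradients from every task, which is where the kind of cross-task transfer described above can come from.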

Is the Master Algorithm Achievable?

That question has been at the center of machine learning almost since its inception. The short answer is yes! Most experts seem to agree that all human knowledge can be learned by a single master algorithm. They also agree that, despite the creation of the MultiModel, we are nowhere near the creation of a universal learner.

Of all the theories about machine learning master algorithms, one of my favorites comes from Pedro Domingos, an AI researcher at the University of Washington. In his book “The Master Algorithm” (what else ;) ) and in several dozen research papers, Domingos has championed the theory of a universal learner that combines the main (or master) algorithms of the five leading schools of machine learning: Symbolists, who rely on inverse deduction as their core algorithm; Connectionists, who use backpropagation; Evolutionaries, whose methods derive from genetic programming; Bayesians, who live and die by Bayes’ theorem; and Analogizers, whose approach is based on support vector machines. I will deep dive into the details of Domingos’s Master Algorithm in a future post. For now, Google’s MultiModel takes us a step closer to making that theory a reality.
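As a toy illustration of what “combining the schools” could look like in practice, and emphatically not Domingos’s actual proposal, the scikit-learn sketch below ensembles rough stand-ins for four of the five schools; the evolutionary school has no off-the-shelf stand-in here and is omitted. The dataset and hyperparameters are arbitrary.

```python
# A toy ensemble of learners loosely standing in for four of the five schools.
# This is only an illustration of combining different learning paradigms,
# not a master algorithm; all settings are arbitrary.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB          # Bayesians
from sklearn.neural_network import MLPClassifier    # Connectionists (backpropagation)
from sklearn.svm import SVC                          # Analogizers (support vector machines)
from sklearn.tree import DecisionTreeClassifier      # Symbolists (rule/tree induction stand-in)

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("symbolist", DecisionTreeClassifier(max_depth=5)),
        ("connectionist", MLPClassifier(hidden_layer_sizes=(32,), max_iter=500)),
        ("bayesian", GaussianNB()),
        ("analogizer", SVC(probability=True)),
    ],
    voting="soft",  # average the predicted probabilities of all learners
)
ensemble.fit(X_train, y_train)
print("held-out accuracy:", ensemble.score(X_test, y_test))
```

A simple voting ensemble like this only averages predictions; the master algorithm Domingos envisions would unify the underlying learning mechanisms themselves, which is a much harder problem.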

Written by

CEO of IntoTheBlock, Chief Scientist at Invector Labs, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
