The Sequence Scope: Google’s Big ML Week

Weekly newsletter with over 120,000 subscribers that discusses impactful ML research papers, cool tech releases, the money in AI, and real-life implementations.

Jesus Rodriguez
4 min read · May 15, 2022

--

📝 Editorial: Google’s Big ML Week

For years, Google I/O has been one of the most exciting conferences in the tech world given the number of new products regularly unveiled at the event. The 2022 edition of Google I/O took place last week, and machine learning (ML) was front and center. Just like Microsoft Ignite or AWS re:Invent, I/O provides a front-row seat to the ML innovation happening at Google and the new additions to its ML stack.

This year’s edition of I/O was packed with ML announcements across software, hardware, and research. On the hardware and infrastructure front, Google announced the general availability of its Cloud TPU VMs as well as what can be considered the biggest ML compute cluster available. The new Cloud ML Hub boasts an astonishing 9 exaflops of compute power. On the software side, Google announced support for 24 new low-resource languages in Google Translate, new ML capabilities for Google Maps, and new libraries added to TensorFlow. Google also made available new versions of LaMDA (Language Model for Dialogue Applications) and the Pathways Language Model (PaLM), which power systems such as the Google Assistant. Another interesting release was the AI Test Kitchen, which provides users with an interactive experience for exploring the capabilities of these models. Finally, there was an unexpected announcement of a new form of augmented reality glasses that leverage sophisticated computer vision and language models.

I/O 2022 provided a glimpse of Google’s investments in ML research and technology. While no major new product lines were announced, these incremental releases should consolidate Google’s position as one of the main ML platform ecosystems on the market.

🗓 Next week in TheSequence Edge:

Edge#191: we discuss the fundamental enabler of distributed training, the message passing interface (MPI); +Google’s paper on General and Scalable Parallelization for ML Computation
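
As a preview of that discussion, here is a minimal sketch of the MPI allreduce pattern that data-parallel training builds on. It assumes the mpi4py and NumPy packages and uses a hypothetical script name; it is an illustration, not code from the paper or from any particular framework.

```python
# Minimal sketch: each worker computes local "gradients" on its data shard,
# then all workers sum and average them via MPI allreduce.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this worker's id
size = comm.Get_size()   # total number of workers

# Stand-in for gradients computed on this worker's shard of the data.
local_grads = np.random.rand(4).astype(np.float64)

# Sum gradients across all workers, then divide by the worker count.
summed = np.empty_like(local_grads)
comm.Allreduce(local_grads, summed, op=MPI.SUM)
avg_grads = summed / size

if rank == 0:
    print("averaged gradients:", avg_grads)
```

Launched with something like `mpirun -np 4 python allreduce_sketch.py`, four workers each produce local gradients and agree on their average, which is the core communication step behind distributed gradient descent.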

--

Jesus Rodriguez

CEO of IntoTheBlock, President of Faktory, I write The Sequence Newsletter, Guest lecturer at Columbia University and Wharton, Angel Investor, Author, Speaker.