
Scaling PyTorch Training with Meta’s FairScale

The framework includes some of the most popular techniques for parallelizing the training of neural networks.

3 min read · Oct 14, 2022
Image Credit: FairScale

I recently started an AI-focused educational newsletter that already has over 125,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing below:

Large neural networks are the norm in the modern deep learning space. Training such large models requires not only a lot of computational power but also complex concurrency and parallelization techniques. As a result, we are seeing the emergence of frameworks that attempt to streamline the parallel training of deep learning models. FairScale is a relatively new project in this area that was originally incubated by Meta and eventually open sourced.

FairScale is a PyTorch-based library that combines multiple approaches to scale and parallelize…


Written by Jesus Rodriguez

Co-Founder and CTO of Sentora (fka IntoTheBlock), President of LayerLens, Faktory and NeuralFabric. Founder of The Sequence, Lecturer at Columbia, Wharton.
