The Sequence Scope: A Model Compression Library You Need to Know About
Weekly newsletter with over 120,000 subscribers that discusses impactful ML research papers, cool tech releases, the money in AI, and real-life implementations.
--
📝 Editorial: A Model Compression Library You Need to Know About
The machine learning (ML) space is currently dominated by large models whose computational requirements are out of reach for most organizations. Model compression is one of the disciplines tackling that challenge by producing smaller models without sacrificing accuracy. Despite the obvious need, model compression remains difficult for ML engineering teams, as most frameworks in the space are relatively nascent. As a result, you rarely hear about ML engineering pipelines that incorporate model compression as a native building block. Quite the opposite: model compression tends to be one of those things you only consider once the problem is too big to ignore; literally 😉
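To make the idea concrete, here is a minimal sketch of one common compression technique, post-training dynamic quantization in PyTorch. The toy model and the size comparison are illustrative assumptions for this newsletter, not DeepSpeed code:

```python
import io

import torch
import torch.nn as nn

# Stand-in model; a real use case would load a trained transformer.
model = nn.Sequential(
    nn.Linear(768, 3072),
    nn.ReLU(),
    nn.Linear(3072, 768),
)

# Post-training dynamic quantization: weights of nn.Linear layers are
# stored as int8 and dequantized on the fly, shrinking the model roughly
# 4x with no retraining required (at some cost in accuracy).
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    # Serialize to an in-memory buffer to measure serialized size.
    buf = io.BytesIO()
    torch.save(m.state_dict(), buf)
    return buf.getbuffer().nbytes / 1e6

print(f"fp32: {size_mb(model):.1f} MB, int8: {size_mb(quantized):.1f} MB")
```

Quantization is only one entry in the compression toolbox; pruning, distillation, and layer reduction follow the same theme of trading redundant capacity for a smaller footprint.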
Last week, Microsoft Research open-sourced a new framework that attempts to streamline compression for deep learning models. DeepSpeed Compression is part of the DeepSpeed platform, which is aimed at addressing the challenges of large-scale AI systems. The framework provides a catalog of common model compression techniques abstracted behind a consistent programming model. Initial experiments showed compression rates of up to 32x on large transformer architectures such as BERT. If DeepSpeed Compression follows the path of other frameworks in the DeepSpeed family, it could be productized as part of the Azure ML platform and streamline the adoption of compression methods in deep learning architectures. DeepSpeed Compression is definitely a framework for the ML engineering community to follow.
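For a sense of what that consistent programming model looks like, here is a minimal sketch based on the wrap-train-clean flow described in DeepSpeed's compression tutorial. The stand-in model, the config file name, and the exact function signatures are assumptions and should be checked against the current documentation:

```python
import torch.nn as nn

# Imports per DeepSpeed's compression tutorial; treat exact names and
# signatures as an assumption, not a definitive API reference.
from deepspeed.compression.compress import init_compression, redundancy_clean

# Stand-in model; in practice this would be a transformer such as BERT.
model = nn.Sequential(nn.Linear(768, 768), nn.GELU(), nn.Linear(768, 768))

# "ds_config.json" is a hypothetical DeepSpeed config whose
# "compression_training" section selects techniques from the catalog
# (quantization, pruning, layer reduction, distillation, ...).
model = init_compression(model, "ds_config.json")

# ... fine-tune the wrapped model here, as in any normal training loop ...

# After training, fold the compression wrappers away to produce the
# final, smaller model.
model = redundancy_clean(model, "ds_config.json")
```

The appeal of this design is that the compression strategy lives in configuration rather than code, so teams can swap or combine techniques without rewriting their training pipelines.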