Technology Fridays: AWS SageMaker Provides End-to-End Lifecycle Management for Your ML Solution

Welcome to Technology Fridays! The race between Amazon, Microsoft, and Google to dominate the cloud machine learning platform space has been heating up. The cloud incumbents are all trying to become the preferred home for machine learning solutions, matching each other's capabilities feature for feature as well as each other's pricing models. That competition has accelerated innovation across the machine learning space. Today, I would like to discuss AWS' latest addition to its machine learning suite, and one that might become the cornerstone of the platform.

SageMaker attempts to address a fundamental challenge of machine learning applications by providing end-to-end lifecycle management capabilities on a consistent infrastructure. SageMaker is a native cloud service that allows data scientists to create machine learning models using familiar tools and deploy them to a production-ready, scalable infrastructure. You can think of SageMaker as a complement to the original AWS ML platform, with added capabilities focused on the lifecycle of machine learning models.

To enable the creation of data science models, SageMaker leverages the popular Jupyter stack. That strategy allows data scientists to create interactive notebooks that explore and interact with registered data sources without needing to provision any backend infrastructure.
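As a sketch of what provisioning such a managed notebook looks like programmatically, the parameters below match boto3's SageMaker `create_notebook_instance` call; the instance name and IAM role ARN are illustrative placeholders, not working values.

```python
def notebook_instance_params(name, role_arn, instance_type="ml.t2.medium"):
    """Build the parameters for boto3's SageMaker create_notebook_instance
    call, which provisions a managed Jupyter notebook server."""
    return {
        "NotebookInstanceName": name,   # must be unique per account/region
        "InstanceType": instance_type,  # size of the managed notebook server
        "RoleArn": role_arn,            # IAM role SageMaker assumes on your behalf
    }

# With AWS credentials configured, these would be passed to
# boto3.client("sagemaker").create_notebook_instance(**params)
params = notebook_instance_params(
    "demo-notebook", "arn:aws:iam::123456789012:role/SageMakerRole")
```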

One of the key benefits of SageMaker is its support for multiple deep learning frameworks. Data scientists can implement models that leverage stacks such as TensorFlow, MXNet, PySpark, and several others. SageMaker also provides robust support for Apache Spark and related technologies.
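To make that framework choice concrete, here is a minimal sketch of the kind of training-job configuration that SageMaker's framework estimators accept, written as plain Python rather than a live SDK call; the entry-point script, role ARN, and instance type are assumed placeholders.

```python
def estimator_config(framework, entry_point, role_arn,
                     instance_type="ml.m4.xlarge", instance_count=1):
    """Build a plain-dict description of a training job for one of the
    frameworks SageMaker supports (illustrative, not the SDK itself)."""
    supported = {"tensorflow", "mxnet", "pyspark"}
    if framework not in supported:
        raise ValueError(f"unsupported framework: {framework}")
    return {
        "framework": framework,
        "entry_point": entry_point,        # your training script
        "role": role_arn,                  # IAM role for the training job
        "train_instance_type": instance_type,
        "train_instance_count": instance_count,
    }

cfg = estimator_config("tensorflow", "train.py",
                       "arn:aws:iam::123456789012:role/SageMakerRole")
```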

Model deployment and hosting are other areas of strength for SageMaker. Amazon SageMaker Hosting Services enables the deployment of machine learning models by packaging them as Docker containers. References to data sources are stored in AWS data services such as S3. Deployment can take place across different AWS availability zones to guarantee availability and scalability. After a model is deployed, SageMaker creates HTTPS endpoints that third-party applications can use to configure and interact with the model. The same programmable interfaces can also be used to train the models with new datasets.
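For illustration, a deployed model's HTTPS endpoint is typically invoked through the sagemaker-runtime `invoke_endpoint` API; the sketch below only builds the request shape rather than making a live call, and the endpoint name and JSON payload layout are assumptions (real model containers define their own content types).

```python
import json

def invoke_request(endpoint_name, features):
    """Shape the arguments for boto3's sagemaker-runtime invoke_endpoint
    call, which posts an inference payload to a deployed model's HTTPS
    endpoint."""
    return {
        "EndpointName": endpoint_name,
        "ContentType": "application/json",
        # Hypothetical payload layout; containers choose their own format.
        "Body": json.dumps({"instances": [features]}),
    }

# boto3.client("sagemaker-runtime").invoke_endpoint(**req) would send this
req = invoke_request("demo-endpoint", [0.5, 1.2, 3.4])
```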

SageMaker can also be used to cleanse and transform datasets, enabling data scientists to create new feature representations that can be reused across models. Developers using SageMaker can build their own algorithms or reuse built-in ones available in the platform's repository. Similarly, data scientists can create new models by assembling layers of existing algorithms.
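As a toy example of the kind of reusable transformation described above, the min-max scaler below (plain Python, no SageMaker dependency) produces a normalized representation of a dataset that any downstream model could consume.

```python
def normalize(rows):
    """Min-max scale each column of a small tabular dataset to [0, 1],
    so the transformed features can be reused across models."""
    cols = list(zip(*rows))
    mins = [min(c) for c in cols]
    maxs = [max(c) for c in cols]
    return [
        [(v - lo) / (hi - lo) if hi != lo else 0.0
         for v, lo, hi in zip(row, mins, maxs)]
        for row in rows
    ]

scaled = normalize([[0, 10], [5, 20], [10, 30]])
```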

SageMaker provides seamless interoperability with dozens of services in the AWS platform which enables the implementation of highly sophisticated machine learning applications. Areas such as access control, data privacy or monitoring are direct beneficiaries of that level of integration.

Competition?

AWS SageMaker competes with the cloud machine learning offerings from Azure and Google Cloud. Microsoft in particular has been very active in the space with the release of new technologies such as its Workbench, Model Management, and Experimentation services. Innovative startups such as Databricks and Algorithmia, with its recently announced CODEX platform, are also relevant when it comes to providing end-to-end lifecycle management for machine learning applications.

Written by

CEO of IntoTheBlock, Chief Scientist at Invector Labs, Guest lecturer at Columbia University, Angel Investor, Author, Speaker.
